Researcher Finds Serious Errors in Microsoft's Bing Chatbot Demonstration
The chatbot built into Microsoft's Bing search engine made serious errors during last week's demonstration, according to an investigation by a researcher. Among other things, the AI produced incorrect financial figures when summarizing a company's quarterly results.
During the demonstration, Microsoft used the chatbot to compare the quarterly figures of Gap and Lululemon, but the AI displayed several incorrect values, including figures that do not appear in the quarterly reports at all. Other Bing demonstrations reportedly produced dubious results as well, such as inaccurate information about bars in Mexico City, wrong opening hours, and the omission of the fact that some of the bars are gay bars.
In a comparison of vacuum cleaners, Bing linked to a different, cordless version of the vacuum in question, leaving it unclear which model its answer actually described.
Microsoft has acknowledged the errors, stating that the system is expected to make mistakes during this preview period and that user feedback is critical to improving it. The chatbot technology in Bing comes from OpenAI, the company behind GPT-3 and ChatGPT.
In general, such chatbots collect and summarize information algorithmically, using language models trained on vast amounts of text to generate responses to user queries. The software, however, cannot reliably distinguish fact from fiction, as the sketch below illustrates.
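To make that limitation concrete, here is a minimal toy sketch in Python. It is an illustration only, not Bing's or OpenAI's actual system: a tiny bigram language model that picks each next word purely from word-pair statistics in its training text. The corpus, function names, and sample output are all hypothetical.

```python
import random
from collections import defaultdict

# Toy illustration: a bigram language model that predicts each next word
# from word-pair statistics alone. Real chatbots use vastly larger neural
# models, but the core limitation is the same: generation is driven by
# statistical plausibility, not by any check against facts.

corpus = (
    "the company reported strong quarterly results . "
    "the company reported weak quarterly results . "
    "analysts expected strong results ."
).split()

# Count which words follow each word in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8):
    """Sample a continuation one word at a time from bigram statistics."""
    word, output = start, [start]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # plausible, but never fact-checked
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the company reported weak quarterly results ." : fluent output,
# but nothing in the model knows whether the results were actually weak.
```

Even at this tiny scale, the model asserts "strong" or "weak" results depending on chance. Scaling up makes the text far more fluent, but the generation step still optimizes for plausibility rather than truth, which is how confident-sounding yet wrong quarterly figures can end up in a demonstration.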