Among the most celebrated AI deployments is that of BERT, one of the first large language models developed by Google, to improve the company’s search engine results.

However, when a user searched how to handle a seizure, they received answers promoting things they should not do, including being told inappropriately to “hold the person down” and “put something in the person’s mouth.” Anyone following the directives Google provided would thus be instructed to do exactly the opposite of what a medical professional would recommend, potentially resulting in death.

The Google seizure error makes sense, given that one of the known vulnerabilities of LLMs is their failure to handle negation, as Allyson Ettinger demonstrated years ago with a simple study. When asked to complete a short sentence, a model would answer 100 percent correctly for affirmative statements (“a robin is …”) and 100 percent incorrectly for negative statements (“a robin is not …”). In fact, it became clear that the models could not actually distinguish between the two scenarios and provided the exact same responses (using nouns such as “bird”) in both cases. Negation remains an issue today and is one of the rare linguistic skills that does not improve as models increase in size and complexity.
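Ettinger-style negation probes are easy to try with an off-the-shelf masked language model. The sketch below is a rough approximation, not her exact protocol: it assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint (her study used different stimuli and scoring), and it simply asks the model to fill in the blank for an affirmative sentence and its negated counterpart.

```python
# Rough reproduction of the negation probe: compare BERT's top
# fill-in-the-blank predictions for an affirmative sentence and its
# negated counterpart. Assumes `pip install transformers torch`.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["a robin is a [MASK].", "a robin is not a [MASK]."]:
    print(sentence)
    # Each prediction is a dict with the candidate token and its score.
    for pred in fill(sentence, top_k=3):
        print(f"  {pred['token_str']!r}  (score={pred['score']:.3f})")
```

Run this and both prompts tend to surface “bird” near the top of the list, which is exactly the failure described above: the negation barely shifts the model’s predictions.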