How artificial intelligence becomes trustworthy
Until a few years ago, AI algorithms were largely a “black box”: fed with data, they spat out results that were difficult to comprehend and verify. Researchers at the ETH AI Center are therefore developing methods to make artificial intelligence comprehensible, verifiable and thus trustworthy.
In March 2023, Geoffrey Hinton and Yuval Noah Harari, together with many other scientists, warned that artificial intelligence could herald the end of humanity. They demanded that the development of so-called large language models be paused for at least six months.
Large language models are AI models that are trained on vast amounts of text and thereby learn to generate texts that are difficult to distinguish from those written by humans. ChatGPT is based on one such large language model. Is AI really that dangerous? And where exactly do the dangers lurk when AI is used?
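To make this a little more concrete, here is a minimal sketch of how a pretrained language model continues a text prompt. It assumes the open-source Hugging Face transformers library and the small, publicly available GPT-2 model; this is an illustration only and not the model behind ChatGPT.

```python
# Minimal sketch: a pretrained language model continues a text prompt.
# Assumes the Hugging Face "transformers" library and the publicly available
# GPT-2 model -- an illustration, not the (much larger) model behind ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Trustworthy artificial intelligence means"
completions = generator(prompt, max_new_tokens=40, num_return_sequences=2)

for completion in completions:
    print(completion["generated_text"])
```

The model simply predicts, word by word, which text is most likely to follow the prompt; scaled up to vast training corpora and model sizes, this is what makes the generated texts hard to tell apart from human writing.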
Come to our booth and learn what we at the ETH AI Center mean by trustworthy AI.
You will also have the opportunity to experience AI in action: in a game reminiscent of the arcade classic “Lunar Lander”, you will land a drone first without and then with AI support, and see how humans and machines together achieve better results than either alone.
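The idea behind such shared control can be sketched roughly as follows: the human proposes a control action at every time step, and a simple assistive policy only steps in when the situation looks risky, for example when the lander is descending too fast. The sketch below is a hypothetical illustration, not the actual booth code; it assumes the open-source Gymnasium library with its LunarLander environment, and the thresholds and the toy assistant are invented for illustration.

```python
# Illustrative sketch of shared human/AI control in a lander game.
# Assumes the open-source Gymnasium library (pip install "gymnasium[box2d]").
# The environment id may be "LunarLander-v2" in older Gymnasium versions.
# The thresholds and the simple assistant are invented for illustration and
# are not the actual booth implementation.
import gymnasium as gym


def assistant_action(observation):
    """Toy assistive policy: intervene only when the situation looks risky."""
    x, y, vx, vy, angle, angular_velocity, left_leg, right_leg = observation
    if vy < -0.5:        # descending too fast -> fire the main engine
        return 2
    if angle > 0.2:      # tilted too far one way -> counteract with a side engine
        return 3
    if angle < -0.2:     # tilted too far the other way
        return 1
    return 0             # otherwise leave the human in control


def human_action(observation):
    """Placeholder for the player's input (keyboard or joystick in the real demo)."""
    return 0


env = gym.make("LunarLander-v3")
observation, info = env.reset(seed=0)

for _ in range(1000):
    action = human_action(observation)
    # The assistant overrides the human only when it detects a risky situation.
    if assistant_action(observation) != 0:
        action = assistant_action(observation)
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break

env.close()
```

The human stays in charge of the landing, while the assistant quietly corrects the most dangerous mistakes; this division of labour is what lets the human-machine team land more reliably than either could alone.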