The length of the tasks AI can handle is doubling every few months. What does this exponential growth mean for how we use it?

A new measure of AI performance could give us an idea of when to expect truly general AI agents. (Image credit: MASTER via Getty Images)

Researchers have developed a new method for assessing the capabilities of artificial intelligence (AI) systems: how quickly they can rival or outperform humans at solving complex problems.

While AI models can generally outperform humans at tasks such as text prediction and information processing, they are less effective on larger projects such as remote management support.

To quantify this gap, a new study proposes evaluating AI models by how long a task they can complete, compared with how long it takes humans to do the same. The researchers published their findings March 30 in the preprint database arXiv, so the work has not yet been peer-reviewed.
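As a rough illustration of that metric, the Python sketch below extrapolates a "task-length horizon" (the longest task, measured in human working time, an AI can reliably finish) under a fixed doubling period. The starting horizon of 60 minutes and the 7-month doubling period are placeholder values chosen for the example, not figures taken from the study.

```python
def projected_horizon(horizon_now_minutes: float,
                      doubling_period_months: float,
                      months_ahead: float) -> float:
    """Extrapolate a task-length horizon assuming it doubles at a fixed interval."""
    doublings = months_ahead / doubling_period_months
    return horizon_now_minutes * (2.0 ** doublings)


if __name__ == "__main__":
    # Placeholder inputs: a 60-minute horizon today, doubling every 7 months.
    for months in (0, 12, 24, 36):
        horizon = projected_horizon(60.0, 7.0, months)
        print(f"{months:2d} months ahead: ~{horizon:,.0f} human-minutes per task")
```

Run as-is, the script simply shows how quickly an exponential trend compounds: under these assumed numbers, the horizon grows from one hour to several hours within a couple of years.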


Source: www.livescience.com
