Libove Blog

Personal Blog about anything - mostly programming, cooking and random thoughts

Link: Situational Awareness - The Decade Ahead


Thesis arguing that #AGI will be reached within the next decade. I have not yet read the full text.

Problems I see with the argumentation so far:

The Data Wall

The data wall is a hard, open problem. Larger models will either need massively more data or have to become orders of magnitude more efficient with the data that already exists.

More data is not available at that scale; the latest models are already trained on virtually the entire internet. The author argues that a strategy similar to AlphaGo, where the model created its own training data by playing against itself, could be deployed for #GenAI. I find this implausible, as generating text intelligent enough to be worth training on already requires a more capable AI.

Similarly, making models more data-efficient is still an open problem, but I don't know enough about this area to evaluate how likely a breakthrough is.

Additionally, going forward, any new datasets will be poisoned by AI output, as generative models are already used at massive scale to create "content". Research suggests that training on such data degrades the performance of models.

Large Scale Computing

Even if the data wall is broken, scaling the models will require massive computing power, consuming ever more resources and energy. This will only be tolerated by the general public as long as it is, on balance, beneficial to them. There are already industries (mainly creative ones) being massively disrupted by the current generation of GenAI. Philosophy Tube, in "Here's What Ethical AI Really Means", shows how strikes and collective action can be tools to prevent the development of more powerful AI systems.