The AI revolution is happening, and the trend is continuing. Believing it will stop is a false premise.
Everything is going to change as we enter a new era.
The era of research. Thanks to AI, we will move up a layer of abstraction, creating more jobs and new commitments for humans, as we have done throughout history¹. AI models will keep getting better and better as we automate more and more of our tasks, starting with code, which is almost an entirely solved problem now. But before everything gets automated out of its economic value, there is research.
Research is mainly about “taste”, “good judgment”, or whatever people like to call it. It’s a philosophical question, close to defining what beauty is. I define it as the ability to choose a long-horizon goal, to make something for the future. Another formulation: the faculty of grasping the reality of the present so well that one can understand, and thus predict, the future. The fact that multiple grand discoveries have been made in different places at the same time hints that way: if something must be discovered, the technological egregore will make it discoverable. It’s a researcher’s job to be ready and looking for those.
Thanks to the wide deployment of the internet, empirical evidence of ideas can circulate freely, and we can automate their creation: an AI model running an experiment on your computer, communicating with other models running other (complementary?) experiments on other people’s computers, sharing their progress and mixing the style, taste, and direction of their human researchers to form one or many solutions.
One question is left open, though: would AI be able to access that egregore and make significant discoveries on its own by iterating on them, or is this a realm of ideas that only a human (physical?) brain can connect to?
In the meantime, everything is moving at a faster pace. I’ve reached my take-off point: the forefront of discovery is unraveling before my eyes, one finding after another. Barriers like code or the human circadian cycle are being destroyed by AI models. What once took months or years just to build the infrastructure is now being experimented on automatically in a dark² lab. This is going so fast that it becomes really hard to take a step back and grasp the magnitude and implications of it all.
So what’s left? For the time being, any individual, any human zooid, can and must have a long, long, long horizon perspective or goal. By long-horizon goal, I mean not something that can be viewed at the scale of one person, but rather at the level of humanity, i.e. making humanity a multi-planetary species, curing diseases, engineering Earth’s self-medication, basically something that would “impress” an extra-terrestrial being arriving on Earth. Something transcendent, something that will make history (for the right reasons); it’s bigger than oneself, or at least aspires to be.
There is a ludicrous number of jobs in those areas with the right kind of long-horizon goal. I believe AI will unlock our move in that direction, shifting from money-making goals to human-centered ones. We are going past the (sole?) economic value of things.
Again, every human being should have their own long-horizon perspective. If one cannot find their own, one can always aspire to someone else’s perspective and plug into it, making it one’s own. It’s not about oneself anymore; it’s about humanity as a whole.
With the recent advancements in AI, three scenarios are possible. In the first, we go a layer of abstraction higher and more jobs are created than ever before, requiring even more humans, and society flourishes. In the second, nothing much happens: we get some things that work well but don’t elevate us as much as promised, a continuation of the spreadsheet, just a factor that fades over time without much change. In the third, everyone is replaced: the AI goes rogue or pursues an abstract goal unrelated to ours and exterminates all of us. For me, the odds of these are 50/10/40. As for the timeline, I believe we will get significant direction by the end of 2027 or early 2028, thanks to new data centers being up and running; until then, it’s pure speculation.
During that time, one can focus on the research side of things, always pushing at the edge of novelty. Harnessing new tools and intelligence for the greater good will always be rewarding; paving the way for future generations by building on the work of others is now available to everyone.
I am aware that not everyone will necessarily want to do the research thing, or at least not all the time. Still, with a long-horizon mission, we can move the cursor toward our goal in meaningful ways, via execution and operations that don’t require hard cognitive tasks. With enough data, or places to collect it, we could reward people with recognition for the work they do to advance the mission.
As an example, going to the Moon in 1969 must have involved all kinds of people, and I believe all of them were proud to be part of that achievement in any way: the person who connected cables to a power source, the people who made the alloys, or the ones who sewed the suits. For a moment, these people felt a purpose that someone doing the same job without one never would.
People can do whatever they want; the purpose will be their calling.