Technofeudalism is the notion that we serve our big tech overlords (Amazon, Google, Apple and Meta) by handing over data to access their cloud space.
Technofeudalism suggests our preferences are no longer our own; they are manufactured by machine networks, commonly known as the cloud. It is underpinned by the theory that the cloud has created a feedback loop that removes our agency: we train the algorithm to find what we like, and then the algorithm trains us to like what it offers.
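That feedback loop can be illustrated with a toy simulation (the topic names and numbers here are invented for illustration, not drawn from any real recommender): the algorithm always serves the user's current favourite, each click reinforces that favourite, and the user's attention steadily concentrates on one topic.

```python
def run_feedback_loop(preferences, steps=50, boost=0.2):
    """Toy filter bubble: the 'cloud' recommends the user's current favourite
    topic, and each click nudges that preference (and the algorithm's model
    of the user) further upward."""
    prefs = dict(preferences)
    for _ in range(steps):
        # The algorithm serves whatever it believes the user likes most.
        recommended = max(prefs, key=prefs.get)
        # The user engages, which reinforces that preference.
        prefs[recommended] += boost
    # Return each topic's share of total attention.
    total = sum(prefs.values())
    return {topic: round(p / total, 3) for topic, p in prefs.items()}

# Three topics start out almost equally liked...
start = {"news": 1.0, "sport": 0.9, "music": 0.8}
print(run_feedback_loop(start))
# ...but after 50 rounds the slight initial lead has snowballed:
# the favourite dominates and the others are squeezed out.
```

The point of the sketch is that no malice is needed: a system that simply optimises for engagement, fed by the engagement it created, narrows what you see.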
This applies to many situations, such as retailing, social media, and even simple things like the background image that Windows shows on startup.
What you have seen for the last few weeks will be what you see this week and next week. The people you have contacted will be the people you contact in future. Whatever you bought or studied will be the product or information you are offered next. Your browser will create icons/links for what it thinks you should see and in some cases there is no easy way to change that.
Extremism is when views diverge so much and become so isolated that different tribes have not only their own opinions and attitudes but their own facts. If you read only what the cloud determines and interact only with the like minds the cloud points you towards, how will you ever understand the ‘other’? You must not only obey Big Brother but love Him. If only Elon were a little more lovable.
So you think the algorithm is in charge and you have less real choice? Wait until the scope, depth and intensity of the use of AI in controlling your view of the world are developed further. You ain’t seen nothin’ yet.
AI has been a hot topic since ChatGPT was released from its cage, but I don’t think what we have to date can really be called AI in any sensible way.
Yes, I have played with ChatGPT. Yes, it is interesting and it produces some very intriguing output - but it has not produced anything novel or new. It has a stack of data at the back end, and some complex algorithms to review the input and produce output. That is not intelligence! Call it AI when it produces something unique - not when it produces a court brief that cites non-existent cases!
I suspect that we remain a long way from real artificial intelligence whose output is not so dependent upon human input but can actually invent/create rather than copy/merge. Real AI cannot rely upon algorithms alone, no matter how complex, because an algorithm must by its nature produce the same answer to the same input every time. I assume tools such as ChatGPT and Bard inject some randomness to deal with this, but that remains a long way from being ‘creative’.
What we call AI today will not be the same as what we will call AI in 20 years. Along the way the comparatively simple software that controls the material we are served from the cloud will become more complex and behave differently. I am saying that, whatever that new software may be and whatever we call it, the scope and character of what it does will expand and change remarkably.
Maybe the question of what is meant by AI deserves its own thread.
I would rather address the broader consequences of software determining more and more of human experience through controlling exposure to knowledge and opinion. What you call that trend and how the software works is a side issue.