When might a Google search not be free in the future

It appears that Google may be contemplating making AI-based Google searches a paid service; as far as is known, normal Google searches will remain free. How many Google search users use the AI version of search?

Microsoft has its own AI answer engine called Copilot (those who use Windows 11 may have noticed the Copilot icon appear on their taskbar). Are all these AI searches possibly going to head down a fee-for-service road?

How would you feel about paying for searches, given that Google search is currently funded mostly by advertising, which often places sponsored results among the search results?

An article about the rumoured change


I don’t use Google search so am not particularly concerned.

That said, I suspect that Google will back away from this idea once it realises that people can easily change search providers. Search is not ‘sticky’ in the same way that something like email is; it is very easy to switch to another search engine, at which point Google no longer even gets ad revenue.

If there is a real value proposition buried somewhere in here, it has yet to be articulated by Google or anyone else. I cannot see search being the AI ‘killer app’.

This feels like Google testing the waters and seeing how customers are likely to react.


Their Gemini has free and paid versions, and the Gemini Advanced (AI-assisted) version is even more expensive than the Business model. Gemini is a lot like Copilot. I don’t see the step to paid services being that hard, or that many people rejecting it. As I noted, Google search (the normal one many use) is very likely to remain “free”, as the ad revenue will continue to be a good earner for Google. I also think many search engines rely at least somewhat, and some very heavily, on Google search results. So we may avoid Google’s capture of our data by using alternative search tools, but it still underpins much of the web: a necessary tool to get answers.

I wouldn’t expect Google search to anytime soon become a subscription service.

Searching and indexing the Internet, to give results to a search, is not the same as ‘generative’ AI systems.

The two should not be confused. GAI is an application that may, or may not, use information derived from Internet searches to form part of the training database.


The GAI references were about how Google has in the past had tiers from free to some cost (and more cost when AI assist is packaged in). Notably, GAI is traded on the crypto markets: Google AI price today, GAI to USD live price, marketcap and chart | CoinMarketCap. GAI can be and is used for internet searches, though it is much more than just this function. It is estimated that searches using AI tools are about 10 times more expensive to run than non-AI-assisted ones. There is no reason Google may not charge a fee for AI-assisted products, particularly as this time there is some insider disclosure suggesting the likelihood. If the reports are true, then the AI assist will be pay-per-use when searches are conducted with it. I think some people would be happy to pay, more so if it came ad-free as part of that payment plan. I don’t know that an AI-assisted search would be ad-free, as ads generate a decent income for Google, plus they add more to the data history Google holds on users: a click on an ad is information sent to Google.


I haven’t used Google search for a long time, since there is a lot of spyware involved.
I use DuckDuckGo: free but safer!

This.

I think the point is that, to the average punter, they just typed a question into the search box. They are unaware of the different types of technology that, behind the scenes, on the server, may be used to produce an answer.

If Google chopped and changed which technology is used, you might not even be able to tell.


I would be concerned about a company (let’s say not Google, because I try to minimise my touchpoints with Google globally) that quietly inserted AI as the search engine. Not because I am afraid of AI per se, but because a Large Language Model (LLM) is potentially unreliable: it produces answers without ever understanding either the question or the answer in any meaningful sense. However, this is a debated perspective.

In other words, in my opinion, a company should be charging less, not more, for offering answers via AI. :wink:


I haven’t used Google AI search but if it’s anything like Meta’s Llama 3 AI it doesn’t know much about anything that happened post-2020 unless you badger it. So no, I wouldn’t pay for such poor results.

True.

However, is some kind of AI going to find what you want better and quicker than basic Google search?

If you spend much time trying to supply people with information you inevitably come up with the question:

“Ought I be answering the question as asked, or should I try to find out what they really want?”

In my experience it took quite a bit of work in each case to decide which, and then, if the answer was yes, to do the second part.

Most people without any training are rather poor at stating what their problem is or asking the right questions about solving it.

Can AI do any better at that than humans or do it at all?

Well, not really a question. Just a list of words to be matched to produce possible results. Google uses some smarts to present better results first, using the PageRank algorithm, plus relevance and context information to customise the results to you personally, such as your location and search history.

I suppose one could consider this to be some sort of AI. Just as some could consider Siri and Alexa on devices AI. Or voice searching in general.

But it is all just keyword matching plus filtering through prior results and context.
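That "keyword matching plus filtering through context" can be shown in a few lines. This is a toy sketch with an invented word-overlap score and a made-up personalisation boost; it is nothing like Google's real pipeline, just the shape of the idea:

```python
# Toy illustration of keyword-match ranking: count query-term overlap,
# then boost results that also match the user's stored context.
# All data and weights here are invented for the example.

def rank(query, documents, context_terms):
    q = set(query.lower().split())
    scored = []
    for doc in documents:
        words = set(doc.lower().split())
        overlap = len(q & words)            # plain keyword matching
        boost = len(context_terms & words)  # crude "personalisation"
        scored.append((overlap + 0.5 * boost, doc))
    # Highest combined score first, like a results page; drop non-matches.
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

docs = [
    "plumber services sydney bookings",
    "history of plumbing in rome",
    "sydney weather forecast",
]
print(rank("plumber", docs, context_terms={"sydney"}))
# The Sydney plumber page ranks first; the Rome history page,
# which never contains the exact keyword, is dropped entirely.
```

Note that nothing here "understands" plumbing: the history page is lost simply because "plumbing" is not the literal string "plumber".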

This generative AI goes further. It is for those who don’t just want a list of information to investigate; they want the system to generate the whole answer.

And there lies the problem.

If you get a list of hits for your search terms, you can go through them and, with some effort, understand what they say; you can also choose which sources are relevant and reliable and draw your own conclusions.

If the AI presents you with a little essay on the topic (in perfect prose) you have to trust what it tells you.

Can AI produce a better list of hits that are ranked in the best order for understanding the topic without hallucinating or producing rubbish?


Out of ChatGPT’s obviously unbiased highly trained ‘mouth’.

Absolutely! AI can generate lists of hits or recommendations based on various criteria such as relevance, credibility, and comprehensiveness without resorting to hallucinations or generating rubbish. However, the quality of the list depends on the quality of the data it’s trained on and the algorithms used.

To produce a better list of hits, AI typically employs techniques like natural language processing (NLP), machine learning, and deep learning. These methods can analyze vast amounts of text data to understand the context, relevance, and credibility of the information.

Additionally, AI models can be fine-tuned to specific tasks or domains, enabling them to generate more accurate and relevant lists of hits tailored to a particular topic or subject matter.

However, it’s essential to keep in mind that while AI can assist in generating lists, human oversight and judgment are crucial to ensuring the quality and accuracy of the results. Humans can review and validate the generated lists to ensure they meet the desired criteria and are free from errors or biases.

My interpretation is that ChatGPT is providing data as well as information because it can, not because there is always intrinsic value added. One can make a list and sift through it, or AI can make a list and one can sift through it. Deuce.


If we had to pay for the AI option, would we be even remotely happy with an inaccurate result? While it is free we might accept that we have to filter the results for the truer outcome, but when we pay for a service we are entitled under the ACL (Australian Consumer Law) to a decent outcome, and if it is found not to be of satisfactory standard we can seek recompense.

How will AI generated answers to our searches be held to the ACL standards? Will we know when the service has failed?

Just as @syncretic found with their tests, the results can look very convincing. Without a very solid knowledge of the subject we can easily be led astray and accept the answers as being correct, thus not even challenging the paid for search results.

If the searches did become paid for, are we going to be the test subjects used to improve the product (paying for the privilege), rather than users receiving results from a proven and accurate system?


The results you get from search engines are already biased; the rankings are routinely gamed for commercial or political purposes. Lists can be thousands of results long, so those at the tail are effectively invisible while those at the top will be read. If I were going to pay for a search engine, AI-driven or not, I would need the ranking formula to ensure relevance dominated SEOs’ manipulation. Perhaps if the advertising revenue from the would-be manipulators were no longer such an issue, that might actually happen.
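One way to read "relevance dominates manipulation" is as a weighting choice in the ranking formula. A hypothetical sketch, with invented weights and signals assumed to be normalised to the 0–1 range (no real engine publishes its formula):

```python
# Hypothetical ranking where the relevance weight is large enough that
# SEO-style boosting alone cannot overtake a genuinely relevant page.
W_RELEVANCE = 10.0  # dominates the score
W_SEO = 1.0         # influence deliberately capped

def score(relevance, seo_signal):
    """Both inputs assumed normalised to 0..1."""
    return W_RELEVANCE * relevance + W_SEO * seo_signal

# A highly relevant page with zero SEO effort still beats a heavily
# optimised but barely relevant one.
honest = score(relevance=0.9, seo_signal=0.0)  # 9.0
gamed = score(relevance=0.2, seo_signal=1.0)   # 3.0
print(honest > gamed)  # True
```

The hard part, of course, is not the weighting but measuring "relevance" in a way the manipulators cannot also game.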

E-novels have been created as a separate species; they are not just novels distributed in soft copy. In doing so, some publishers at least (Amazon) have given up all pretence of the traditional role of book publisher. The book cover, layout and proofing are not their responsibility. Many paid publications are riddled with errors. They now allow the reader to report these errors, which is much cheaper than paying for an experienced human copy editor. Typically homophones and near homophones are the last to be fixed (as a spell checker cannot find them), so years after publication we still have:

“the people and their pest were removed to a safer location”

“the team descended into the bowls of the earth”

Some software houses have a long tradition of public software beta releases (e.g. Microsoft). I think there is an excellent chance that a subscription search engine team would be contemplating the same. The question is: if you don’t know exactly why an AI produced nonsense, how do you fix it? The generic disclaimers about the limits of the scope of training data generating errors seem to me, at least in part, to be a cloak for real ignorance.

This article from the ABC tells us that AI-generated images can be picked by silly composition errors, such as asymmetry of earrings. The problem for the system builders is that there is no way to reach in and tell the software to ensure that humans have the same ring in each ear; unlike fixing typos, there is no place to apply specific corrections.

It gets worse. Altering “bowls” to “bowels” is pretty certain to be correct if a human decides from context that it is required. But if such a fix were applied to asymmetric earrings, for those who according to fashion wear only one, would it be an improvement?
