An unwelcome and unexpected "update" to various Windows workstations at my lab resulted in them being equipped with "Microsoft CoPilot", a new "feature" with 3 overt drawbacks: (a) it clearly sends things to Microsoft, so it's a privacy/security issue for workstations where proprietary work is done; (b) it's a wannabe Chat-GPT, so it's going to give the appearance of having authoritative information when it is merely quoting a random Reddit post; (c) it uses the "Bing" search engine, so it only has access to a fraction of the sources of data that it might otherwise have.
So, if this malware infected your PC, pull up Taskbar Settings, and disable "CoPilot". I am not sure it can be removed entirely, but I would wager that uninstalling the "Edge" web browser would help.
But the attempt to compare Chat-GPT-like "AI" (natural-language predictive text) with "neural network" models, such as the bee-counting example recently cited, conflates and confuses enough distinct issues that I will attempt to clarify.
The current "Chat-GPT" models are nothing more than a voice front end to what can be thought of as a "web browser" or a database of things found on the web. Its answers are no better than the data it could find on the web. So, if you ask it what time the post office closes, it will likely give you a correct answer, as no one would have any reason to create anything on the web that would misrepresent the closing time of the post office.
In contrast, a "neural network" is also "trained", but it usually is trained in one very narrow and specific area, such as subway routes or ways to fold proteins. Once trained, it is nearly impossible to audit how it arrived at a "decision", so it can silently do unpredictable things.
Back to the Chat-GPT stuff, a subject more contentious or susceptible to exploitation might have multiple "answers" vying for dominance in the dataset, creating opportunities for confusion.
So, if I ask an AI "Where can I get some good tacos in the East Village [of Manhattan, NYC]?", the AI might reply with Google's top 3 results for "tacos" in that area (Taqueria, Emperor Al Pastor, and El Diablito) and miss the clear favorite of the locals, the tiny "La Palapa" on St Marks Place. Their food is good enough that they don't have to advertise, so the inferior restaurants have to invest in "Search Engine Optimization" and fake Yelp reviews to make themselves appear popular and get customers.
Therefore, "Truth" with a capital "T", such as the truly best tacos, cannot ever be found these days by using the internet, as the best tacos are good enough to not "need" self-promotion, and the internet only "knows" about those who self-promote. A true "AI" would use real-time data to notice the line of people (all using their cellphones, of course) waiting outside La Palapa, but that's beyond the current abilities of the plainly deficient "AI" products currently available. But note that one could ask any teenager in the East Village the taco question, and very likely get the correct answer.
And no one has thought through what happens when an AI collects, uses, and trusts data created by ANOTHER AI. That will certainly be fun, won't it? The errors will tend to multiply.
We already have people who do not like AI "stealing" their work without payment of some sort of royalty, so overt traps are being laid for AIs that will "poison" the understanding they try to gain - for example, deliberately misleading metadata on photos that an AI might use to learn what physical things in the real world look like, so it wrongly learns that a cat-shaped object is a "fox", not a "cat".
http://tinyurl.com/3mtraa3d
But understanding what is "predictable" and what is not is far more important to beekeeping than what tools we use to try to predict an event. The 10-day weather forecast is very "predictable" because the entire world has been sliced up into small grid squares, and a few supercomputers use complex models to decide if weather in one square is going to move into the adjacent square. You can trust the 10-day forecast with high certainty because it is a SHORT-TERM prediction, made with very robust models on powerful machines.
The further out one goes from the 10-day forecast, the more "chaotic" the results become, and the harder they are to predict. This is generally true of nearly all natural systems - they appear to be simple and orderly processes, and their short-term "behavior" can be predicted with accuracy, but the longer term is nearly impossible to predict. The processes become "chaotic". This is not the "chaos" of Mandelbrot Sets and other "fractals", where simple equations produce complex results. This is the more traditional chaos of random stuff happening for no apparent reason.
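You can see this short-term-predictable, long-term-chaotic behavior in a few lines of Python. The logistic map below is a textbook toy chaotic system (not a weather model - that's my substitution for illustration): two starting values differing by one part in a million track each other closely for the first few steps, then diverge completely.

```python
# Logistic map x -> r*x*(1-x), a standard toy example of chaos.
# Two initial conditions a hair apart agree early on, then diverge.
r = 4.0
a, b = 0.400000, 0.400001  # differ by one millionth

for step in range(1, 51):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step in (5, 15, 50):
        print(f"step {step:2d}: {a:.6f} vs {b:.6f}  (gap {abs(a - b):.6f})")
```

At step 5 the two trajectories are still nearly identical; well before step 50 they bear no resemblance to each other. That is the weather-forecast situation in miniature: the model is exact and deterministic, yet any tiny error in the starting measurements swamps the prediction beyond a short horizon.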
So, can AI help a beekeeper? Not when a beekeeper wants to know things that simply cannot be predicted over more than a very brief interval. The beekeeper is asking to know the unpredictable.
Also, "random" isn't. Remember coin flips? They might seem random, but they tend to land slightly more often on the same side they started on:
https://arxiv.org/abs/2310.04153
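To get a feel for how small that bias is, here is a quick simulation. The 50.8% same-side figure is my reading of the rough estimate in the preprint above; treat the exact number as an assumption.

```python
import random

def flip(start_side: str, p_same: float = 0.508, rng=random) -> str:
    """Simulate one caught coin flip with a same-side bias.

    p_same = 0.508 is the approximate estimate from the preprint
    cited above (an assumption here, not gospel).
    """
    if rng.random() < p_same:
        return start_side
    return "tails" if start_side == "heads" else "heads"

rng = random.Random(42)       # fixed seed so the run is repeatable
n = 100_000
same = sum(flip("heads", rng=rng) == "heads" for _ in range(n))
print(f"landed same side {same / n:.3f} of the time")  # near 0.508, not 0.500
```

The bias is tiny - you need tens of thousands of flips before it reliably stands out from noise - which is exactly why it went unmeasured for so long.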
In fact, "random" is so hard to do right that the only legitimate random number generator may be this one - https://qrng.anu.edu.au/ - and even it only SEEMS random because we have not yet worked out some very basic things about how the universe actually works. Even this may all become very simple and predictable at some point.
"Random" is very important because cryptography depends upon random numbers being truly random. When a method of making random numbers inevitably turns out to be not quite as random as thought, someone can work out how to crack your passwords, empty your bank account, or play other pranks.
***********************************************
The BEE-L mailing list is powered by L-Soft's renowned
LISTSERV(R) list management software. For more information, go to:
http://www.lsoft.com/LISTSERV-powered.html