Sunday, February 12, 2023

Is the Fear of AI Legit?

By Dr. Akif Khan

ChatGPT is just another Bitcoin, with a whole lot of hype and even more instability, causing only two things: it either makes people extremely excited or gives them the screaming abdabs.

Please, don’t get triggered by this blasphemy; let me explain. Yes, many people thought Bitcoin was a farce and apparently lost the opportunity to earn millions of dollars by investing in it at the onset. But remember, there are many who made no fortune and instead lost their life savings to Bitcoin. Let me tell you something, and you can confirm this with any computer scientist privately (because they’re not going to accept it publicly): building predictive, human-like algorithms from big data (such as tweets and Facebook posts) has been an ongoing phenomenon in the CS research community for more than a decade now. Big data and data science are not new research fields in the computer sciences. For example, millions of posts and tweets around a certain topic could be accessed by anyone via a simple piece of code, to be analysed and studied later. Twitter and Facebook have made this difficult now, though; there are restrictions and paywalls, as far as I know.
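To give a flavour of what that “simple piece of code” looks like, here is a toy sketch of the kind of big-data keyword analysis described above. The posts are made up (since the platform APIs are now largely restricted), and the function name is my own, purely for illustration:

```python
from collections import Counter
import re

# Hypothetical sample of posts; a real pipeline would pull millions
# of these from a platform API (now largely behind paywalls).
posts = [
    "ChatGPT is amazing, the hype is real!",
    "Bitcoin crashed again, so much for the hype.",
    "ChatGPT wrote my essay. ChatGPT everywhere.",
]

def keyword_counts(posts, keywords):
    """Count how often each keyword appears across all posts (case-insensitive)."""
    counts = Counter()
    for post in posts:
        words = re.findall(r"[A-Za-z]+", post.lower())
        for kw in keywords:
            counts[kw] += words.count(kw.lower())
    return counts

print(keyword_counts(posts, ["ChatGPT", "hype"]))
```

Swap in a feed of real posts and the same few lines become the starting point for the trend and sentiment studies researchers have been running for over a decade.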


To be honest, I don’t know what’s so special about ChatGPT, because these things have been around us for a long time now. We have Google Search, Google Assistant, SEO, Siri, Alexa, XiaoAi TongXue and loads of other apps and software tools that complete longer sentences, and even intelligent graphic tools within software like Photoshop, filters and whatnot. True, ChatGPT is unique in its pre-trained transformer, which first learns from text and then generates a response. Many would argue that it is based not on a search engine but on learned data. That is also true. But isn’t everything pre-trained in one sense or another? There is another novelty: remembering the previous questions during a chat session, unlike Siri or Google Assistant, where if you ask who Newton is and then ask where he was born, they would not know who the “he” in the question is. Sometimes they do. But is that it? Now all these tools and algorithms are gathered in one place, given fancy names and an “AI” tag (essentially a rebranding of existing packages), just like Tesla and SpaceX, to produce longer texts and pretty images, only for a lot of problems, instability and implementation issues to be discovered later.
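That “remembering who ‘he’ is” trick can be illustrated with a toy sketch. To be clear, this is not how ChatGPT actually works; it is a crude, hypothetical stand-in that just remembers the last name mentioned in the session:

```python
import re

class ToyChatSession:
    """A toy chat session that remembers the last proper noun mentioned,
    so a follow-up "he"/"she"/"it" can be resolved. A crude, hypothetical
    stand-in for real conversational context, for illustration only."""

    def __init__(self):
        self.last_entity = None

    def ask(self, question):
        # Capitalised words not at the start of the question, e.g. "Newton".
        names = [m.group() for m in re.finditer(r"\b[A-Z][a-z]+\b", question)
                 if m.start() > 0]
        if names:
            self.last_entity = names[-1]
            return f"Answering about {self.last_entity}."
        if self.last_entity and re.search(r"\b(he|she|it)\b", question, re.IGNORECASE):
            return f"Resolving pronoun to {self.last_entity}."
        return "I don't know who you mean."

session = ToyChatSession()
print(session.ask("Who is Newton?"))      # remembers "Newton"
print(session.ask("Where was he born?"))  # resolves "he" back to Newton
```

A stateless assistant, by contrast, sees only the second question in isolation, which is exactly why it has no idea who “he” is.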

Moreover, you can’t just ask these AI tools anything and get a perfect result. An ordinary user will inevitably wonder whether they are using the tool properly when they don’t get the desired results. The beautiful images and interesting essays you see going viral are the result of cleverly crafted keywords. It’s just like using Google. You will always find a colleague in the office who can help you dig your desired results out of Google Search; everyone has Google Search at their fingertips, but not everyone has “the skill” to use it.

You must also be wondering why and how a natural/physical scientist/engineer is able to comment on ChatGPT. Humbly, yours truly has been into computers and coding for around 25 years now, having started using computers from the 286/386, Windows 3.1, DOS, GW-BASIC, Fortran, dBase and C, and then seeing the whole evolution of coding, big data, computers and the internet, from Netscape, Yahoo/MSN messengers, Orkut and mIRC, and the birth of Facebook/Twitter, to Snapchat and TikTok. We still use and analyse big data and codes as complex as FT, FEFFIT, WT, DFT, ab-initio, VASP etc. for our research projects related to chemistry, physics and materials engineering.

Having said that, the point I am trying to emphasise is that the fear of AI is completely unfounded. What we see around us is mostly off-the-shelf and custom AIs, which have already been around for a while. I don’t remember where, but I heard Yuval Noah Harari make an interesting point: if these AI tools enhance our intelligence, they also multiply our stupidity in equal proportion. Elsewhere, though, he himself has presented AI as something almost too futuristic and perfect.

Some would argue that AIs have started passing difficult tests and beating grandmasters at chess. But look at it this way. Any smart person with access to the internet can pass any test today. Ever tried copying and pasting a riddle into the Google search bar to find the answer? True, ChatGPT is pre-trained and does not work with search engines at the backend, but the data it has been fed is the same data that search engines use.

Secondly, beating grandmasters at chess, a game of possibilities and calculations, with skyrocketing internet and processing speeds, is great but not too impressive. IBM’s Deep Blue defeated world champion Garry Kasparov in 1997 with a processing power of only around 11 GFLOPS, less than a modern entry-level desktop CPU. With currently available computers, you are talking at least thousands of times more processing power. Throw in a GPU and imagine. The smartphone you’re holding in your hand is more powerful than Deep Blue, which could evaluate 200 million positions per second. Today’s iPhone 13 carries a Neural Engine rated at roughly 15.8 trillion operations per second (1 trillion ops = 1,000 giga-ops). Imagine how much processing power desktops, gaming PCs, mining rigs, mainframes and supercomputers hold right now. Yet ask ChatGPT a dozen questions and you will only get three or four answers to your satisfaction.
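The back-of-the-envelope comparison above can be checked with simple arithmetic. The figures below are rough public numbers, treated here as order-of-magnitude assumptions rather than precise benchmarks:

```python
# Rough public figures, used as order-of-magnitude assumptions.
deep_blue_gflops = 11.38        # Deep Blue, 1997 (approx. peak GFLOPS)
iphone13_trillion_ops = 15.8    # iPhone 13 Neural Engine, trillion ops/sec (approx.)

# Put both on the same scale: 1 trillion ops = 1,000 giga-ops.
iphone13_gops = iphone13_trillion_ops * 1000

speedup = iphone13_gops / deep_blue_gflops
print(f"A modern phone has roughly {speedup:,.0f}x Deep Blue's raw throughput.")
```

Three orders of magnitude in a pocket, and the point stands: raw throughput alone clearly does not buy satisfying answers.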

Human minds and feelings are not very predictable. They are more complex and dependent on too many different kinds of variables, many of them unknown. Human emotions and feelings are not analytical and rational; they don’t follow neat patterns. Yes, you can manipulate them with the help of advertisements and traditional and social media, and you can predict and redirect shopping behaviours, dating preferences, porn choices and fashion trends, but that’s about it. The cultural intricacies, religious beliefs, relationship dynamics, love and affection for one’s near ones, and other aspects of the social sphere, such as poverty, politics, voting behaviour, tolerance and gender dynamics, are no-go areas for such analytically predictive and calculating algorithms, on top of the fact that most of our computer scientists, AI enthusiasts and geeks don’t even understand or love to talk about these issues, let alone consider them.

For example, can AIs accurately predict the stock market? To an extent, maybe, but not perfectly or reliably, despite the huge computer processing power at our disposal. Why? In any sci-fi movie with a time machine in it, what do the antagonists do at the onset? They go back in time and invest money in companies like IBM, Facebook or Tesla that they know will pay them back in millions, right? So why don’t (or can’t) we do the same? It is because AI and computers can only predict the human element to an extent, owing to its unpredictable and complex nature. Maybe in the future they could know all the variables, dynamics and complexities of human behaviour, and somehow all the data and computer scientists, neuroscientists, materials engineers and geneticists could work hand in hand to build fortune-teller AIs, but not now. And who is to say that those AIs, sans the human element, won’t fail like the first Matrix? Who is to say those AIs won’t turn into the dreaming Sonny or the evil VIKI (I, Robot), or the strategist Dolores (Westworld), with even more complex emotions?


It is also human nature that we start thinking about worst-case scenarios and panicking about new things instead of sitting back, asking experts questions and recognising the historicity of events. We tend to love sensationalism. Maybe there should be an AI studying this behaviour as well.

A concern raised by academics is that students might use ChatGPT to write essays and assignments; apparently, ChatGPT has passed difficult university exams. First off, there are now multiple tools that can detect ChatGPT-written content based on the same algorithms. Secondly, university tests are mostly analytical and their answers are objective. Objectivity versus subjectivity is the key here.

Don’t worry. If AI turns out to be Skynet in the end, some white protagonist, the saviour of humanity, will come and crush it by finding some flaw in it.

And no, this blurb is not written by ChatGPT. 
