OpenAI’s new Project Q* poses potential threats, according to an unnamed source cited in a Reuters report. Q*, which is reportedly an artificial general intelligence (AGI) technology, is said to carry dangerous consequences. AI systems like ChatGPT are considered “weak” AI and do not pose the serious risk that a strong system like Q*, still in development, is said to pose. Q* is considered a top-secret project that was recently reported to have had a breakthrough.
Furthermore, sources claim that last week’s drama around the CEO position at OpenAI is linked to differences in ideology regarding the dangers of artificial intelligence.
OpenAI began as a nonprofit whose mission was to create artificial intelligence that would benefit humanity. Later, OpenAI LP, a “capped-profit” company, was set up under the parent organization to handle the commercial side.
Although the real reason for Altman’s ousting was never openly discussed, there have been reports of ideological differences between the commercial and nonprofit wings of the company.
Reuters reported that some scientists wrote a letter to the nonprofit board that removed Altman, expressing worries about the potential influence of Q*. However, a source familiar with the board’s thinking denies this claim. Speculation about Q* grew over the Thanksgiving weekend, fueled in part by its mysterious name, creating a fearful reputation for a project about which little information is available. Altman seemed to acknowledge the project’s existence in an interview with The Verge, stating, “No particular comment on that unfortunate leak.”
There has been an obvious shift toward hyper-commercialization at the company since the launch of ChatGPT, and the balance that had been maintained between the commercial and nonprofit sides has been slipping. Ilya Sutskever of OpenAI has long been skeptical of AGI and fears that artificial intelligence could someday treat humans the way we treat animals.
An artificial intelligence as powerful as Q* poses significant risks in the wrong hands, potentially causing catastrophic consequences for humanity. Even with good intentions, Q*’s complex reasoning and decision-making may lead to harmful outcomes, underscoring the importance of carefully evaluating its applications.
The risks around AI
The advanced cognitive abilities promised by the new model introduce uncertainty. OpenAI scientists claim that the AGI can think and reason like humans, but this also means there are many aspects of the model that we cannot predict or understand. The less we know about it, the more challenging it becomes to anticipate or address potential issues and control its behavior.
The fast pace of technological change can outstrip people’s ability to adapt, leaving entire generations struggling to acquire the skills needed to adjust and potentially resulting in job losses. Training alone may not be a straightforward solution: throughout history, some individuals have progressed alongside technology while others were left to navigate the challenges on their own.
We have long watched movies in which machines take over humanity. With the dawn of this new tech era, that is starting to sound less like fiction and more like reality. Mission: Impossible – Dead Reckoning already gave us a glimpse of a world in which an artificial intelligence grows to unthinkable power.
The scientists at the company should revisit these narratives to glean insights and better prepare for the future. While some may trust that scientists can keep artificial intelligence under control, the possibility of machines going rogue is a concern that cannot be dismissed. Vigilance and preparedness are essential as we navigate the era of artificial intelligence models capable of human-like thinking and reasoning.