Is Artificial Intelligence Dangerous? We Asked It!

We Asked an AI If It Thought AI Was Dangerous


Here is what it said!

OpenAI, a research nonprofit backed by Elon Musk, aims to find ways to create artificial general intelligence. The company decided not to publish an AI system that can generate news stories and fiction because, in the wrong hands, it could be dangerous.

The OpenAI research laboratory later released the full version of that text-generating AI system, which experts warned could be put to malicious use. OpenAI announced the system, GPT-2, in February but withheld the full model for fear it would be used to spread fake news, spam, and disinformation. In the interim, the company released smaller, less capable versions of the system to gauge their reception.

The nonprofit research company OpenAI said its text-generation model was so good that it was too dangerous to release to the public. The model is called GPT-2, and the danger lies in its ability to generate convincing fake news.

The software uses natural language processing (NLP) to produce meaningful text that tells a story, and many claim its output is as good as a human writer's. It works by taking the sentences it is given and predicting which words should come next.
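To make that prediction step concrete, here is a minimal sketch of next-word prediction in practice. It uses the publicly released GPT-2 weights through the Hugging Face transformers library; the library, the "gpt2" checkpoint name, and the sample prompt are our own assumptions, since the article names no specific tooling.

```python
# Minimal next-word prediction sketch (assumes the Hugging Face
# `transformers` library and the public "gpt2" checkpoint; neither
# is specified in the article).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, sequence_length, vocab_size)

# The scores at the last position rank every vocabulary entry as a
# candidate next word; generation repeats this step word by word.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))
```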

The system, known as GPT-2, was trained on a corpus of eight million web pages. GPT-3, the successor to GPT-2 (which was initially withheld over concerns about its unethical use in creating fake news), uses unsupervised machine learning and has been used to process 4.5 TB of text.

That 4.5 TB of text contains billions of usage patterns drawn from the Internet. As part of its training, GPT-3 was fed an enormous supply of phrases from social media posts, literary works, cooking recipes, excerpts from business emails, programming manuals, press articles, news reports, philosophical essays, poems, research papers, and scientific reports.

Fed a sample of content, anything from a few words to a few pages, the model could produce coherent, plausible passages that matched the theme and style of the source material. According to the researchers, GPT-2 was able to mimic the style of classic fiction and news stories based on whatever it was fed. The quality of the output was impressive and lacked the errors common in previous text-prediction systems, but the real novelty was the wide range of content it could create, opening it up to a host of potential applications.

While the new artificial intelligence system is good at composing text, the researchers behind it said they would not release it for fear it could be misused. Its creators withheld the system, trained to mimic natural human language, from public use out of concern over malicious applications of the technology.

While the technology is useful for a variety of everyday purposes, such as helping writers produce sharper copy and improving voice assistants and smart speakers, it could also be put to dangerous ends, such as creating false but plausible-sounding news stories or social media posts. The system, developed by the nonprofit AI research firm OpenAI, whose backers have included Elon Musk and Microsoft, can write page-long responses to prompts, imitate fantasy prose, fake celebrity news stories, and even complete homework assignments.

The creators of this revolutionary AI system for writing news and fiction, which some have described as "deepfakes for text", took the unusual step of withholding their research for fear of possible abuse. The new AI-powered text generator is so good that its creators decided not to make it publicly accessible. Instead, the company published a technical paper and a smaller, less powerful version of the same text generator for other researchers to use.

OpenAI, a nonprofit research firm backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, GPT-2, is so capable that the risk of malicious use is too high. It is therefore breaking with its normal practice of making full research results public, to allow more time to discuss the ramifications of the technological breakthrough.

An AI system is given some text, either a few words or a full page, and is asked to write the next few sentences based on its predictions of what should come next.
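As an illustration of that prompt-and-continue workflow, the sketch below asks a publicly available GPT-2 model to extend a short prompt. The Hugging Face pipeline API and the sample prompt are again our assumptions, not details from the article.

```python
# Prompt-continuation sketch (assumes the Hugging Face `transformers`
# text-generation pipeline with the public "gpt2" checkpoint).
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation repeatable
generator = pipeline("text-generation", model="gpt2")

# Given a few words, the model predicts the sentences that follow.
prompt = "Scientists announced on Monday that"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```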

These systems push the boundaries of what is considered possible, both in the quality of their output and in the variety of their possible uses. Researchers note that OpenAI, Google, and others do not have a monopoly on large language models, and a forum they held last year with a handful of universities discussed the ethical and societal challenges of deploying these models. Addressing harmful biases in language models, and instilling in them the common sense, causal reasoning, and moral judgment many would like to see, remains an enormous research challenge.

In 2017, researchers invented a time-saving mathematical technique called the transformer, which makes it possible to train across many processors in parallel. When language models are trained to predict missing words in a text, they adjust the strength of the connections between their layers of computing elements (neurons) to reduce prediction errors. The following year, Google released a large transformer model called BERT, which set off an explosion of other models built on the technique.
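Here is a small sketch of that missing-word training objective, using Google's released BERT weights through the Hugging Face fill-mask pipeline; the toolchain and checkpoint name are our assumptions, not details named in the article.

```python
# Masked-word prediction sketch (assumes the Hugging Face `fill-mask`
# pipeline and the public "bert-base-uncased" checkpoint).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT was trained to fill in the blanked-out word from its context;
# each candidate is returned with the model's confidence score.
for candidate in unmasker("The capital of France is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```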
