


AI is developing faster than experts, including Bill Gates, imagined

When Bill Gates – the man who co-founded Microsoft and developed the software that helped transform personal computers into everyday devices – calls artificial intelligence “the greatest technological advancement of my lifetime,” it’s hard not to stop and say “wow.”


Gates shared his thoughts on how important he thinks generative AI systems will become in an ABC-TV interview with Oprah Winfrey earlier this month. He said the technology will improve countless aspects of society. For example, it will impact health care by acting as a “third person” at your doctor’s appointments, offering you real-time translations and summaries of what the medical professional is saying. And it will become an educational assistant, able to provide each student with a personal tutor “who is always available.”

But the comment from Gates that really caught my attention was about how quickly the AI tools introduced to the world nearly two years ago, with OpenAI's release of ChatGPT, have evolved.

“This is the first technology that is moving faster than even the insiders expected,” Gates told Winfrey. Despite all the good things AI could bring, he added, “I have significant concerns about the risks.”

He is not alone. Former Google CEO Eric Schmidt made similar comments last year, noting that “people cannot adapt to a world with artificial intelligence.”

Gates, for his part, believes that given the pace of development, companies need to work with governments to create regulations that will, among other things, ensure that AI does not undermine our economy. (The United Nations also laid out its thoughts on AI governance last week in a new report titled “Governing AI for Humanity.”)

Gates is not the only well-known tech expert who believes government regulation will be necessary to mitigate the risks of rapidly evolving systems. OpenAI CEO Sam Altman, who spoke with Winfrey on the same special, also noted that there has been “a pretty steep rate of improvement” in AI systems. His suggestion is that AI makers need to work with the government “to figure out how to do safety testing on these systems… like we do for airplanes or new drugs or things like that.”

Once that is done, Altman said, “it will be easier for us to set the regulatory framework later.”

Given the adage that history is doomed to repeat itself (a new technology like social media is introduced, and the government scrambles to figure out how to regulate it only after it has done harm), I wonder whether the conversations Altman told Winfrey he is now having with people in government "every few days" should have been happening in the first place, before the launch of ChatGPT.

Here are the other developments in AI that deserve your attention.

Oprah talks about AI but misses chance with OpenAI’s Altman

Speaking of the Winfrey special, "AI and the Future of Us," now streaming on Hulu: last week I said I would write a review. Beyond the aforementioned insights from Altman and Gates, I'll just say that I was disappointed by what Winfrey didn't ask Altman.

In particular, when, or if, OpenAI will reveal details about its popular chatbot's training data. Why do we want to know? In part because OpenAI and one of its backers, Microsoft, are being sued by The New York Times for allegedly scraping the Times' content library, without permission, attribution or compensation, to train the large language model (LLM) that powers ChatGPT.

Lawyers and legal scholars call the lawsuit the "first major test for AI in the area of copyright law."

Although OpenAI has not said what is included in its training data, the company argues that any copyrighted content it copied from The New York Times and other content creators to create its for-profit chatbot would fall under the doctrine of fair use.

I don’t know who will prevail in the case, but considering that Winfrey is one of the most influential content creators in the world, and that well-known authors, artists and publishers have raised concerns and filed lawsuits claiming that their intellectual property is being stolen by AI companies as training data, you’d think she could have asked Altman something about it.

I guess we’ll just have to wait for the next special.

The “Godmother of AI” will help you build new worlds

If you regularly follow news about AI, you've likely heard of the "godfathers of AI": computer scientists Yoshua Bengio, Geoffrey Hinton and Yann LeCun, who have made headlines with their thoughts on the risks, opportunities and pace of AI development. Last week, it was Fei-Fei Li's turn to make headlines. Li, an AI researcher and Stanford University professor who has also worked at Google, is considered the godmother of AI. And she has launched a new AI company, World Labs, after raising $230 million.

World Labs says it is developing large language models with a focus on "spatial intelligence," models able to "perceive, generate and interact with the 3D world."

What does this mean? Longtime tech reporter Steven Levy wrote in Wired that the goal of World Labs is to teach “AI systems deep knowledge of physical reality” so that artists, designers, game developers, film studios and engineers who use these AI engines can all be “world builders.”

World Labs' first product is expected in 2025, another sign of how quickly AI is developing. There is great optimism about Li's abilities: her startup is already valued at over a billion dollars.

How much electricity and water does an AI need to write a short email?

We know that computing comes at a cost to the environment. There is a cost to powering and cooling the server farms that house the processors, storage, networking devices and other technology that deliver the internet and online services to us every day.

So what is the environmental cost of a chatbot query? The Washington Post turned to researchers at the University of California, Riverside. They found that a single 100-word email generated by a chatbot using OpenAI's GPT-4 model, which powers ChatGPT, requires 519 milliliters of water, a little more than a full bottle. The same email uses 0.14 kilowatt-hours of electricity, enough to run "14 LED lightbulbs for 1 hour."

It’s worth reading their study to see what these costs total, considering that, according to the Pew Research Center, about a quarter of all Americans have used ChatGPT since its launch.
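To get a feel for how those per-email figures scale, here is a minimal back-of-envelope sketch. The per-email water and electricity numbers are the ones cited above; the one-million-email count is a purely hypothetical assumption for illustration, not a figure from the study.

```python
# Back-of-envelope scaling of the per-email resource figures cited above.
WATER_ML_PER_EMAIL = 519      # milliliters of water per 100-word email (cited)
ENERGY_KWH_PER_EMAIL = 0.14   # kilowatt-hours per 100-word email (cited)

def totals(num_emails: int) -> tuple[float, float]:
    """Return (liters of water, kilowatt-hours) for num_emails AI-written emails."""
    water_liters = num_emails * WATER_ML_PER_EMAIL / 1000  # mL -> L
    energy_kwh = num_emails * ENERGY_KWH_PER_EMAIL
    return water_liters, energy_kwh

# Hypothetical example: one million such emails.
liters, kwh = totals(1_000_000)
print(f"{liters:,.0f} L of water, {kwh:,.0f} kWh of electricity")
```

Even at that made-up volume, the totals land in the hundreds of thousands of liters and kilowatt-hours, which is why the study's aggregate estimates are worth a look.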

Public libraries can help combat AI-generated disinformation

The Urban Libraries Council has published a helpful summary outlining how public libraries can use their role as community spaces to encourage people to meet in person. The aim is not only to help people overcome feelings of social isolation in an increasingly digital world, but also to offer them tools and workshops that teach how to spot misinformation and disinformation spread on digital platforms.


“Several studies suggest that misinformation and disinformation are more likely to thrive in societies that are either highly polarized or in communities with low social connectedness,” the council wrote in the 10-page report, titled “The Role of Libraries as Public Spaces in Combating Misinformation, Disinformation, and Social Isolation in the Age of Generative AI.”

Among library programs that have already been successful, the council highlighted the Boston Public Library for hosting a workshop in August aimed at countering misinformation by teaching digital literacy skills and offering tools to help people “identify true information on the internet.”

For your information, according to the American Library Association, there are more than 123,000 libraries of all kinds in the United States.

My lessons in AI vocabulary

Subscribers to the newsletter version of this column will get additional insight from me each week in the form of AI vocabulary they should know (you can sign up for all things AI at CNET’s AI Atlas consumer hub).

If you just want some quick refreshers, I’ve also started creating short TikTok vocabulary lessons. The lesson on AI, chatbots, and LLMs can be found here. And a super quick summary on hallucinations and training data can be found here.

The videos were and are entirely created and presented by a human.
