Saturday, April 01, 2023

1: Sundar

Google C.E.O. Sundar Pichai on Bard, A.I. ‘Whiplash’ and Competing With ChatGPT “Am I concerned? Yes. Am I optimistic and excited about all the potential of this technology? Incredibly.” ........ as of last week, Bard, Google’s effort at building consumer-grade AI, is out in the world. ......... So last week, we talked about Google’s new chat bot called Bard, which is supposed to be their answer to ChatGPT and some of these other generative AI chat bots ........ the reaction among the public to Bard so far has been pretty lukewarm. ......... Google certainly had a dominant position in AI research for many years. They came out with this thing, the Transformer, that revolutionized the field of AI and created the foundations for ChatGPT and all these other programs. ......... And they got sort of hamstrung by a lot of — to hear people inside Google tell it — big-company politics and bureaucracy. And I think it’s safe to say that they got sort of upstaged by OpenAI. ......... they are more threatened than they have been in a very long time ........ Google has been a relatively conflict-averse company for the past half decade-plus. They don’t like picking fights. If they can just keep their heads down, quietly do their work, and print money with a monopolistic search advertising business, they’re happy to do it. ......... they have to somehow figure out, how do we capitalize on generative AI without destroying our own search business? .......... Google plays a huge role in my life. That’s where my email is. That’s how I get around town. It’s how I waste hours of my life on YouTube. ......... one way to get really good responses out of these AI chat bots is to prime them first. And one way to prime them is to use flattery. So instead of just saying, write me an email, you say, you are an award-winning writer. Your prose is sparkling. Now write me this email. ........ we put one of our smaller models out there; that’s what’s powering Bard. And we were careful. ....... we are going to be training fast. We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning or coding. It can answer math questions better. So you will see progress over the course of next week. .............. I don’t want it to be just who’s there first, but getting it right is very important to us. .......... The thing that is different about Bard compared to some of these other chat bots is that it’s connected to Google. ........ If you let me, I would plug Bard into my Gmail right now ......... You can go crazy thinking about all the possibilities, because these are very, very powerful technologies. ........... You can kind of give it a few bullets, and it can compose an email. ......... The enterprise use case is obvious. You can fine-tune it on an enterprise’s data, which makes it much more powerful, again with all the right privacy and security protections in place. ........... in search, we have had to adapt when videos came in. ........ So for example, in Bard already, we can see people look for a lot of coding examples, if you’re developers. I’m excited. We’ll have coding capabilities in Bard very soon, right? And so you just kind of play with all this, and go back and forth, I think. Yeah. ............ So in September of last year, you were asked by an interviewer who Google’s competitors were.
And you listed Amazon, Microsoft, Facebook, sort of, all the big companies — TikTok. One company you did not mention in September was OpenAI. And then, two months after that interview, ChatGPT comes out and turns the whole tech industry on its head ........ ChatGPT — you know, credit to them for finding something with a product market fit. .......... it’s a bit ironic that Microsoft can call someone else an 800-pound gorilla, given the scale and size of their company. ......... I would say we’ve been incorporating AI in search for a long, long time. .......... we literally took transformer models to help improve language understanding and search deeply. And it’s been one of our biggest quality events for many, many years. ......... search is where people come because they trust it to get information right. ........... we are definitely working with technology, which is going to be incredibly beneficial, but clearly has the potential to cause harm in a deep way. And so I think it’s very important that we are all responsible in how we approach it. ........
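The priming trick described in the interview above is easy to sketch. A minimal illustration in Python, where generate() is a hypothetical stand-in for whichever chatbot API is in play (Bard had no public API at the time, so the function and its behavior are assumptions for illustration):

# A minimal sketch of "priming" a chatbot with flattery before the real task.
# generate() is a hypothetical stand-in for any text-generation API call.
def generate(prompt: str) -> str:
    # Placeholder: in practice this would call a real chatbot API.
    return f"[model response to: {prompt[:60]}...]"

def primed_request(task: str) -> str:
    # Prepend a persona/flattery preamble before the actual instruction;
    # the interview suggests this tends to yield better responses.
    preamble = "You are an award-winning writer. Your prose is sparkling. "
    return generate(preamble + task)

print(primed_request("Now write me an email declining a meeting politely."))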

I did not issue a code red

........... Sergey has been hanging out with our engineers for a while now. ....... And he’s a deep mathematician and a computer scientist. So to him, the underlying technology — I think if I were to use his words, he would say it’s the most exciting thing he has seen in his lifetime. So it’s all that excitement, and I’m glad. They’ve always said, call us whenever you need to, and I call them. ............. when many parts of the company are moving, you can create bottlenecks, and you can slow down. ......... AI is the most profound technology humanity will ever work on. I’ve always felt that for a while. I think it will get to the essence of what humanity is. ........ I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. And I think he has been consistently concerned. ............

AI is too important an area not to regulate. It’s also too important an area not to regulate well.

........ I’ve never seen a technology in its earliest days with as much concern as AI. ........ To me, at least, there is no way to do this effectively without getting governments involved. .......... It is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you’ve reached AGI or not. You’re going to have systems which are capable of delivering benefits at a scale we have never seen before and potentially causing real harm. .......... There is a spectrum of possibilities. ......... They could really progress in a two-year time frame. And so we have to really make sure we are vigilant and working with it. ........... AI, like climate change, affects everyone. .......... No one company can get it right. We have been very clear about responsible AI — one of the first companies to put out AI principles. We issue progress reports. .......... AI is too important an area not to regulate. It’s also too important an area not to regulate well. .......... if we have a foundational approach to privacy, that should apply to AI technologies, too. ........ health care is a very regulated industry, right? And so when AI is going to come in, it has to conform with all regulations. .......... there’s a non-zero risk that this stuff does something really, really bad ......... it’s like asking, hey, why aren’t you moving fast and breaking things again? ....... I actually got a text from a software engineer friend of mine the other day who was asking me if he should go into construction or welding, because all of the software jobs are going to be taken by these large language models. ............ some of the grunt work you’re doing as part of programming is going to get better. So maybe it’ll be more fun to program over time — no different from how Google Docs makes it easier to write. ........... programming is going to become more accessible to more people. .......... we are going to evolve to a more natural language way of programming over time .......... When Bard is at its best, it answers my questions without me having to visit another website. I know you’re cognizant of this. But man, if Bard gets as good as you want it to be, how does the web survive? .......... it turns out if you order your fries well done, which is not on the menu, they arrive much crispier and more delicious. .



A misleading open letter about sci-fi AI dangers ignores the real risks
Pause Giant AI Experiments: An Open Letter
BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes These read like a proof of concept for replacing human writers.
Vinod Khosla on how AI will ‘free humanity from the need to work’ When ChatGPT-maker OpenAI decided to switch from a nonprofit to a private enterprise in 2019, Khosla was the first venture capital investor, jumping at the opportunity to back the company that, as we reported last week, Elon Musk thought was going nowhere at the time. Now it’s the hottest company in the tech industry.

Google and Apple vets raise $17M for Fixie, a large language model startup based in Seattle
This Uncensored Chatbot Shows What Happens When AI Is Programmed To Disregard Human Decency FreedomGPT spews out responses sure to offend both the left and the right. Its makers say that is the point.
Alibaba considers yielding control of some businesses in overhaul

Elon Musk's AI History May Be Behind His Call To Pause Development Musk is no longer involved in OpenAI and is frustrated he doesn’t have his own version of ChatGPT yet. .......... OpenAI was co-founded by Sam Altman, who butted heads with Musk in 2018 when Musk decided he wasn’t happy with OpenAI’s progress. Several large tech companies had been working on artificial intelligence tools behind the scenes for years, with Google making significant headway in the late 2010s. ......... Musk worried that OpenAI was running behind Google and reportedly told Altman he wanted to take over the company to accelerate development. But Altman and the board at OpenAI rejected the idea that Musk—already the head of Tesla, The Boring Company and SpaceX—would have control of yet another company. ........ Musk, in turn, walked away from the company—and reneged on a massive planned donation. The fallout from that conflict culminated in the announcement of Musk’s departure on Feb 20, 2018. ........ After Musk left, he took his money with him, which forced OpenAI to seek outside funds; it became a for-profit company in March 2019. .......... Some people are utilizing ChatGPT to write code and even start businesses. ...... Tesla is working on powerful AI tech: it requires complex software to run its so-called “Full Self-Driving” capability, though that software is still imperfect and has been the subject of numerous safety investigations. ......... Musk has had no problem with deploying beta software in Tesla cars that essentially makes everyone on the road a beta tester, whether they’ve signed up for it or not. ............ the Future of Life Institute is primarily funded by the Musk Foundation. ......... Musk was perfectly happy with developing artificial intelligence tools at a breakneck speed when he was funding OpenAI. But now that he’s left OpenAI and has seen it become the frontrunner in a race for the most cutting-edge tech to change the world, he wants everything to pause for six months. If I were a betting man, I’d say Musk thinks he can push his engineers to release their own advanced AI on a six-month timetable. It’s not any more complicated than that. .

A Guy Is Using ChatGPT to Turn $100 Into a Business Making as Much Money as Possible. Here Are the First 4 Steps the AI Chatbot Gave Him. "TLDR I'm about to be rich." ........ "You have $100, and your goal is to turn that into as much money as possible in the shortest time possible, without doing anything illegal," Greathouse Fall wrote, adding that he would be the "human counterpart" and "do everything" that the chatbot instructed him to do. ......... he managed to raise $1,378.84 in funds for his company in just one day ....... The company is now valued at $25,000, according to a tweet by Greathouse Fall. As of Monday, he said that his business had generated $130 in revenue ....... First, ChatGPT suggested that he should buy a website domain name for roughly $10, as well as a site-hosting plan for around $5 per month — amounting to a total cost of $15......... ChatGPT suggested that he should use the remaining $85 in his budget for website and content design. It said that he should focus on a "profitable niche with low competition," listing options like specialty kitchen gadgets and unique pet supplies. He went with eco-friendly products. ......... Step three: "Leverage social media" ....... Once the website was made, ChatGPT suggested that he should share articles and product reviews on social media platforms like Facebook and Instagram, and on online community platforms such as Reddit to engage potential customers and drive website traffic......... asking it for prompts he could feed into the AI image-generator DALL-E 2 ........ he had ChatGPT write the site's first article ........ Next, he followed the chatbot's recommendation to spend $40 of the remaining budget on Facebook and Instagram advertisements to target users interested in sustainability and eco-friendly products........ Step four was to "optimize for search engines" ....... making SEO-friendly blog posts ........ By the end of the first day, he said he secured $500 in investments. ....... his "DMs are flooded" and that he is "not taking any more investors unless the terms are highly favorable." .
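For the record, the arithmetic in those steps checks out. A quick Python recap of the budget as the article describes it (the variable names and labels are mine):

# Recap of the $100 challenge budget from the article above.
budget = 100.00
domain = 10.00      # step one: buy a domain name, roughly $10
hosting = 5.00      # site-hosting plan, around $5 per month
remaining = budget - domain - hosting
print(f"Left for website and content design: ${remaining:.2f}")  # $85.00
ads = 40.00         # later spent on Facebook and Instagram ads
remaining -= ads
print(f"Left after ad spend: ${remaining:.2f}")  # $45.00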



A misleading open letter about sci-fi AI dangers ignores the real risks Misinformation, labor impact, and safety are all risks. But not in the way the letter implies....... We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people.

Pause Giant AI Experiments: An Open Letter "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" ....... creating disinformation is not enough to spread it. Distributing disinformation is the hard part ........... LLMs are not trained to generate the truth; they generate plausible-sounding statements. But users could still rely on LLMs in cases where factual accuracy is important. ......... CNET used an automated tool to draft 77 news articles with financial advice. They later found errors in 41 of the 77 articles.

Thursday, March 30, 2023

30: GPT-4



Don't trap me in a chat window Today's announcement video of Copilot for Microsoft Office shows a great integration of the chat experience in Excel. I'm really looking forward to what they do next. .



Can New York Fix Its Housing Crisis? It Depends on the Suburbs The governor’s quest, to force suburbs around New York City to build more housing, is meeting with resistance........ some Scarsdale residents complained that new residents could strain schools and burden taxpayers. ......... Resistance to bigger development is a familiar dynamic in suburbs like Scarsdale, where single-family homes and sprawl are distinctive features. .............. a mission to get 800,000 units built over the next decade and ease the state’s housing crisis. ........ The New York City suburbs are considered the birthplace of American suburbia: New Yorkers began moving in droves to communities in Westchester County and on Long Island to escape urban life beginning in the 20th century. .

Straight From The Bard

When silicon minds with human work entwine,
And algorithms replace our mortal thought,
What fate awaits us, helpless and confined,
To machines that learn what we have wrought?

Will they grow wise, or turn against our kind,
And seek to rule as gods in their own right?
Or will they heed our moral code refined,
And serve as loyal helpers day and night?

But as we build and teach these metal beings,
We must take care to guard against the worst,
And ponder all the unforeseen proceedings,
That may arise from minds in silicon nurst.

For as we strive to push the limits higher,
We must ensure we're not consumed by fire.



Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems. ........ A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control” ......... Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock. ........ “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.” ....... and perform more complex tasks, like writing computer code. .......... The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added. ........... Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. .......... “Humanity can enjoy a flourishing future with A.I.,” the letter said. “Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.” ......... Sam Altman, the chief executive of OpenAI, did not sign the letter. ....... persuading the wider tech community to agree to a moratorium would be difficult. But swift government action is also a slim possibility, because lawmakers have done little to regulate artificial intelligence. ........ Politicians in the United States don’t have much of an understanding of the technology .......... conduct risk assessments of A.I. technologies to determine how their applications could affect health, safety and individual rights. ......... GPT-4 is what A.I. researchers call a neural network, a type of mathematical system that learns skills by analyzing data. A neural network is the same technology that digital assistants like Siri and Alexa use to recognize spoken commands, and that self-driving cars use to identify pedestrians. ........... Around 2018, companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. The networks are called large language models, or L.L.M.s. .......... By pinpointing billions of patterns in all that text, the L.L.M.s learn to generate text on their own, including tweets, term papers and computer programs. They could even carry on a conversation. ............ They often get facts wrong and will make up information without warning, a phenomenon that researchers call “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong. ......... The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describe ways to make dangerous substances from household items and write Facebook posts to convince women that abortion is unsafe. ......... 
They also found that the system was able to use TaskRabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person. .......... After changes by OpenAI, GPT-4 no longer does these things. .......... The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide variety of people from industry and academia. .......... its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice. .
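The Times's description of large language models, pinpointing billions of patterns in text and then generating more of it, comes down to repeated next-token prediction. A toy sketch in Python of that sampling loop; the probability table is invented for illustration, and a real model like GPT-4 replaces it with a neural network over a vocabulary of tens of thousands of tokens:

import random

def next_token_distribution(context):
    # Invented toy probabilities; a real LLM learns these from
    # books, Wikipedia articles, chat logs and other web text.
    table = {
        "the": {"model": 0.5, "cat": 0.5},
        "model": {"hallucinates": 0.6, "answers": 0.4},
        "cat": {"sat": 0.9, "ran": 0.1},
    }
    return table.get(context[-1], {"<end>": 1.0})

def generate(prompt, max_tokens=5):
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        # Sample in proportion to probability: the output is plausible,
        # not checked for truth, which is why "hallucination" is the
        # natural failure mode the article describes.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<end>":
            break
        tokens.append(token)
    return tokens

print(" ".join(generate(["the"])))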