
Sunday, January 05, 2025

5: Sam Altman



We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. ......... These years have been the most rewarding, fun, best, interesting, exhausting, stressful, and—particularly for the last two—unpleasant years of my life so far. ......... Getting fired in public with no warning kicked off a really crazy few hours, and a pretty crazy few days. The “fog of war” was the strangest part. None of us were able to get satisfactory answers about what had happened, or why. ........ I appreciate the way so many people worked together to build a stronger system of governance for OpenAI that enables us to pursue our mission of ensuring that AGI benefits all of humanity. ............ The last two years have been like a decade at a normal company. When any company grows and evolves so fast, interests naturally diverge. And when any company in an important industry is in the lead, lots of people attack it for all sorts of reasons, especially when they are trying to compete with it. .......... when we started we had no idea we would have to build a product company; we thought we were just going to do great research. ......... We also had no idea we would need such a crazy amount of capital. There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now. ............ We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications. ........... We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes. ............. We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else.

Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

....... Ron Conway and Brian Chesky went so far above and beyond the call of duty that I’m not even sure how to describe it. I’ve of course heard stories about Ron’s ability and tenaciousness for years and I’ve spent a lot of time with Brian over the past couple of years getting a huge amount of help and advice. .......... They used their vast networks for everything needed and were able to navigate many complex situations. .......... I thought I knew what it looked like to support a founder and a company, and in some small sense I did. But I have never before seen, or even heard of, anything like what these guys did, and now I get more fully why they have the legendary status they do. They are different and both fully deserve their genuinely unique reputations, but they are similar in their remarkable ability to move mountains and help, and in their unwavering commitment in times of need. ............ I look forward to paying it forward.


Sam Altman Interview On Nov. 30, 2022, traffic to OpenAI’s website peaked at a number a little north of zero. It was a startup so small and sleepy that the owners didn’t bother tracking their web traffic. It was a quiet day, the last the company would ever know. Within two months, OpenAI was being pounded by more than 100 million visitors trying, and freaking out about, ChatGPT. .......... his relentless pursuit of artificial general intelligence—the still-theoretical next phase of AI, in which machines will be capable of performing any intellectual task a human can do. ........... Conservatively, I would say there were 20 founding dinners that year [2015], and then one ends up being entered into the canon, and everyone talks about that. The most important one to me personally was Ilya and I at the Counter in Mountain View [California]. Just the two of us. ........... 2012 comes along. Ilya and others do AlexNet. I keep watching the progress, and I’m like, “Man, deep learning seems real. Also, it seems like it scales. That’s a big, big deal. Someone should do something.” ............. It’s impossible to overstate how nonmainstream AGI was in 2014. People were afraid to talk to me, because I was saying I wanted to start an AGI effort. It was, like, cancelable. It could ruin your career. But a lot of people said there’s one person you really gotta talk to, and that was Ilya. So I stalked Ilya at a conference, got him in the hallway, and we talked. .............. The pitch was just come build AGI. ........ I cannot overstate how heretical it was at the time to say we’re gonna build AGI. So you filter out 99% of the world, and you only get the really talented, original thinkers. .......... if you’re building, like, the 10,000th photo-sharing app? Really hard to recruit talent. ........... Convince me no one else is doing it, and appeal to a small, really talented set? You can get them all. And they all wanna work together. So we had what at the time sounded like an audacious or maybe outlandish pitch, and it pushed away all of the senior experts in the field, and we got the ragtag, young, talented people who are good to start with. .............. People used to joke in those days that the only thing I would do was walk into a meeting and say, “Scale it up!” Which is not true, but that was kind of the thrust of that time period. ........... The rest of the company was like, “Why are you making us launch this? It’s a bad decision. It’s not ready.” I don’t make a lot of “we’re gonna do this thing” decisions, but this was one of them. ............... And that started off a mad scramble to get a lot of compute—which we did not have at the time—because we had launched this with no business model or thoughts for a business model. I remember a meeting that December where I sort of said, “I’ll consider any idea for how we’re going to pay for this, but we can’t go on.” And there were some truly horrible ideas—and no good ones. So we just said, “Fine, we’re just gonna try a subscription, and we’ll figure it out later.” That just stuck. We launched with GPT-3.5, and we knew we had GPT-4 [coming] ............... It’s very unusual to have been a VC first and have had a pretty long VC career and then run a company. .............. And I knew I was both overwhelmed with gratitude and, like, “F---, I’m gonna get strapped to a rocket ship, and my life is gonna be totally different and not that fun.” I had a lot of gallows humor about it.
My husband tells funny stories from that period of how I would come home, and he’d be like, “This is so great!” And I was like, “This is just really bad. It’s bad for you, too. You just don’t realize it yet, but it’s really bad.” ................. It complicated my ability to live my life. But in the company, you can be a well-known CEO or not, people are just like, “Where’s my f---ing GPUs?” .............. come with me to the research meeting right after this, and you will see nothing but disrespect. Which is great. .............. that year was such an insane blur, from November of 2022 to November of 2023, I barely remember it. It literally felt like we built out an entire company from almost scratch in 12 months, and we did it in crazy public. One of my learnings, looking back, is everybody says they’re not going to screw up the relative ranking of important versus urgent, and everybody gets tricked by urgent. So I would say the first moment when I was coldly staring at reality in the face—that this was not going to work—was about 12:05 p.m. on whatever that Friday afternoon was. ................ so they fired me at noon on a Friday. A bunch of other people quit Friday night. By late Friday night I was like, “We’re just going to go start a new AGI effort.” Later Friday night, some of the executive team was like, “Um, we think we might get this undone. Chill out, just wait.” .................. Saturday morning, two of the board members called and wanted to talk about me coming back. I was initially just supermad and said no. And then I was like, “OK, fine.” I really care about [OpenAI]. But I was like, “Only if the whole board quits.” I wish I had taken a different tack than that, but at the time it felt like a just thing to ask for. ............. There was this whole thing of, like, “Sam didn’t even tell the board that he was gonna launch ChatGPT.” ......... But what is true is I definitely was not like, “We’re gonna launch this thing that is gonna be a huge deal.” ............. It’s a crazy year, right? It’s a company that’s moving a million miles an hour in a lot of different ways. ............ But then very quickly it was over, and I had a complete mess on my hands. And it got worse every day. It was like another government investigation, another old board member leaking fake news to the press. And all those people that I feel like really f---ed me and f---ed the company were gone, and now I had to clean up their mess. .............. Once everything was cleared up, it was all fine, but in the first few days no one knew anything. And so I’d be walking down the hall, and [people] would avert their eyes. It was like I had a terminal cancer diagnosis. There was sympathy, empathy, but [no one] was sure what to say. ................ we do a three-hour executive team meeting on Mondays ............. yesterday and today, six one-on-ones with engineers. I’m going to the research meeting right after this. Tomorrow is a day where there’s a couple of big partnership meetings and a lot of compute meetings. .............. There’s five meetings on building up compute. I have three product brainstorm meetings tomorrow, and I’ve got a big dinner with a major hardware partner after. .......... A few things that are weekly rhythms, and then it’s mostly whatever comes up. ............ I’m not a big inspirational email writer, but lots of one-on-one, small-group meetings and then a lot of stuff over Slack. .............. I’m a big Slack user. You can get a lot of data in the muck.
I mean, there’s nothing that’s as good as being in a meeting with a small research team for depth. But for breadth, man, you can get a lot that way. ............ You’ve put research in a different building from the rest of the company, a couple of miles away. .............. Research will still have its own area. Protecting the core of research is really critical to what we do. .............. Usually you get a very good product company and a very bad research lab. We’re very fortunate that the little product company we bolted on is the fastest-growing tech company maybe ever—certainly in a long time. But that could easily subsume the magic of research, and I do not intend to let that happen. .........................

when an AI system can do what very skilled humans in important jobs can do—I’d call that AGI.

.......... Can it start as a computer program and decide it wants to become a doctor? Can it do what the best people in the field can do or the 98th percentile? How autonomous is it? I don’t have deep, precise answers there yet, but if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, “OK, that’s AGI-ish.” .................... when I think about superintelligence, the key thing to me is, can this system rapidly increase the rate of scientific discovery that happens on planet Earth? ................. it was clear people were trying to use ChatGPT for search a lot, and that actually wasn’t something that we had in mind when we first launched it. ....................... since we’ve launched search in ChatGPT, I almost don’t use Google anymore. ........ Many people who work at OpenAI get really heartwarming emails when people are like, “I was sick for years, no doctor told me what I had. I finally put all my symptoms and test results into ChatGPT—it said I had this rare disease. I went to a doctor, and they gave me this thing, and I’m totally cured.” ............ Long term, as you think about a system that really just has incredible capability, there’s risks that are probably hard to precisely imagine and model. But I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship product and learn. .................. three potential roadblocks to progress: scaling the models, chip scarcity and energy scarcity .......... I think 2025 will be an incredible year. ............. He’s the president of the United States. I support any president. .......... The question was, will he abuse his political power of being co-president, or whatever he calls himself now, to mess with a business competitor? I don’t think he’ll do that. I genuinely don’t. May turn out to be proven wrong. ........... for all of the stories—people talk about how he berates people and blows up and whatever, I hadn’t experienced that. ............ The thing I really deeply agree with the president on is, it is wild how difficult it has become to build things in the United States. Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the US to lead AI. And the US really needs to lead AI.


Tuesday, May 16, 2023

16: Sam Altman

The C.E.O. of OpenAI Heads to Congress to Discuss Rules for A.I. Sam Altman, who leads ChatGPT’s parent company, is expected to call for some regulation of artificial intelligence as Washington weighs its next steps. ........ ChatGPT, his company’s most notable product, has captured the public’s imagination like no tech product has in years, inspiring hopes and fears about its transformative powers. ....... “The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we’re prepared for.” He’s expected to say in his testimony that “the regulation of A.I. is essential.” ......... Mr. Armstrong, who with the venture capitalist Blake Byers founded NewLimit in 2021 with $100 million of their own money, is the latest tech mogul to be fascinated by longevity. Peter Thiel, OpenAI’s Sam Altman and Oracle’s Larry Ellison are among those who have poured millions into companies researching it. .



OpenAI chief set to call for greater regulation of artificial intelligence Sam Altman, co-founder of start-up behind ChatGPT, to make first appearance before Congress on Tuesday ...... Altman, whose company created AI chatbot ChatGPT, will say that “the regulation of AI is essential” as he testifies for the first time before Congress on Tuesday. ........ According to prepared remarks released before the hearing, Altman will tell the Senate judiciary subcommittee on privacy, technology and the law that he is “eager to help policymakers as they determine how to facilitate regulation that balances incentivising safety while ensuring that people are able to access the technology’s benefits”. ........ Altman’s testimony will recommend that AI companies adhere to an “appropriate set of safety requirements, including internal and external testing prior to release” and licensing or registration conditions for AI models........ safety requirements which “AI companies must meet [should] have a governance regime flexible enough to adapt to new technological developments”....... Earlier this month, AI pioneer Geoffrey Hinton quit Google after a decade at the tech giant in order to speak freely about the risks of the technology, which he warned would amplify societal divides and could be used by bad actors. ....... “Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security” ......... “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls” .......... “This hearing begins our subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology . . . as we explore sensible standards and principles to help us navigate this uncharted territory,” .



AI pioneer Geoffrey Hinton warns about growing risks as he quits Google

Wednesday, April 26, 2023

Can ChatGPT Access The Internet?

As an AI language model, ChatGPT cannot directly access the internet. However, it has been trained on a vast corpus of text sourced from many websites and other online material, which gives it knowledge on a wide range of topics. When you ask it a question, it draws on what it learned from that training data to generate a response. It can discuss a specific website or online resource if you paste the relevant content into the conversation, but it cannot open a URL or fetch new pages on its own. Keep in mind that it has no access to real-time information or to events that occurred after its training data was collected.

Can ChatGPT Draw?

As a language model, ChatGPT does not have the ability to create visual content, including drawings, paintings, or other forms of visual art. However, it can provide descriptions or explanations of visual content, such as describing a painting or explaining how to draw a particular object. Additionally, it can generate text-based diagrams or flowcharts to help illustrate concepts or processes. If you need help with creating visual content, there are many online tools and resources available that can help you create or edit images, such as graphic design software, drawing programs, or photo editors.

Can ChatGPT Do Math?

Yes, ChatGPT is capable of performing various mathematical operations and solving equations. As a language model, ChatGPT has been trained on a vast amount of data, including mathematical concepts and formulas, and can use this knowledge to answer math-related questions and perform calculations.

One way in which ChatGPT can do math is by recognizing mathematical expressions in natural language and converting them into mathematical notation. For example, if a user asks "What is the square root of 25?", ChatGPT can recognize the phrase "square root" and the number "25" and provide the answer "5" using mathematical notation.

In addition to basic arithmetic operations like addition, subtraction, multiplication, and division, ChatGPT can also perform more complex mathematical operations like calculus, statistics, and linear algebra. For example, if a user asks "What is the integral of x^2?", ChatGPT can use its knowledge of calculus to provide the answer "x^3/3 + C", where C is the constant of integration.
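As a rough sanity check on worked examples like these, here is a minimal sketch of how the same answers can be reproduced symbolically. It assumes Python 3 with the sympy library installed, which is my own choice and not something the explanation above mentions.

import sympy as sp

x = sp.symbols("x")

# "What is the square root of 25?"
print(sp.sqrt(25))            # 5

# "What is the integral of x^2?"
print(sp.integrate(x**2, x))  # x**3/3; the constant of integration C is added by hand

The point is not that ChatGPT works this way internally, only that its math answers are easy to verify with a dedicated symbolic tool.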

ChatGPT can also solve equations and systems of equations. For example, if a user asks "What is the value of x in 2x + 3 = 7?", ChatGPT can use its knowledge of algebra to solve the equation and provide the answer "2". Similarly, if a user asks "Solve the system of equations 2x + y = 5, x - 3y = 2", ChatGPT can work through the algebra and provide the answer "x = 17/7, y = 1/7".
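The equation examples can be checked the same way. The sketch below (again assuming Python with sympy, my own choice) solves both the single equation and the two-equation system, confirming the fractional solution quoted above.

import sympy as sp

x, y = sp.symbols("x y")

# "What is the value of x in 2x + 3 = 7?"
print(sp.solve(sp.Eq(2*x + 3, 7), x))                            # [2]

# "Solve the system of equations 2x + y = 5, x - 3y = 2"
print(sp.solve([sp.Eq(2*x + y, 5), sp.Eq(x - 3*y, 2)], [x, y]))  # {x: 17/7, y: 1/7}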

While ChatGPT is capable of performing mathematical operations and solving equations, it is important to note that it may not always be the best tool for the job. For more complex or specialized mathematical tasks, it may be necessary to use dedicated mathematical software or consult with a math expert.

Furthermore, ChatGPT's performance in math-related tasks may be limited by the accuracy of the input and the complexity of the question. If the user provides incorrect or incomplete information, ChatGPT may not be able to provide an accurate answer. Additionally, if the question requires advanced mathematical knowledge or creative problem-solving, ChatGPT may struggle to provide an accurate or complete solution.

In summary, ChatGPT is capable of performing various mathematical operations and solving equations. Its ability to understand natural language and convert it into mathematical notation allows it to provide quick and accurate answers to math-related questions. However, its performance may be limited by the complexity of the task and the accuracy of the input.

Can Chatbot Be Detected By Turnitin?

As a language model, ChatGPT is not capable of submitting text to Turnitin or any other plagiarism detection software as it does not have the ability to upload or submit documents. However, if text generated by a chatbot is copied and pasted into a document or assignment that is later submitted to Turnitin, it is possible that Turnitin may detect the text as potentially plagiarized, depending on the quality and uniqueness of the generated text and how widely it has been used elsewhere. It is important to note that using chatbots or other AI tools to generate academic work without proper attribution or acknowledgement is considered academic dishonesty and can result in severe consequences, including failing a course or even expulsion from an educational institution.

Can Chatbot Write Code?

Chatbots are designed to engage in conversation with humans, and while they are adept at understanding natural language and generating text-based responses, they are generally not equipped to write code. However, there are some chatbots that have been developed specifically for programming purposes, which are known as coding chatbots.

Coding chatbots are designed to help developers write code more efficiently by providing suggestions, code snippets, and other helpful tools. They use natural language processing (NLP) and machine learning algorithms to understand the intent and context of the developer's request, and then generate relevant code or suggestions based on that understanding.

One way in which coding chatbots can help developers is by providing suggestions for code completion. When a developer is typing out a line of code, the coding chatbot can analyze the syntax and context of the code and offer suggestions for the rest of the line or the entire block of code. This can help speed up the coding process and reduce errors.

Another way in which coding chatbots can be helpful is by providing code snippets for commonly used functions or tasks. For example, a chatbot designed for web development might provide a pre-written code snippet for creating a responsive navigation menu, saving the developer time and effort in writing the code from scratch.
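For illustration, here is a minimal sketch of how a developer might request such a snippet from a general-purpose chat model. It assumes the openai Python package in its 2023-era (pre-1.0) interface and an API key in the environment; the passage above does not name any particular API, so treat the details as hypothetical.

import os
import openai

# Hypothetical setup: requires `pip install openai` (pre-1.0) and an OPENAI_API_KEY variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Write HTML and CSS for a simple responsive navigation menu."},
    ],
)

# The returned text is a starting point, not production-ready code; review it before use.
print(response["choices"][0]["message"]["content"])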

Coding chatbots can also be used to automate repetitive tasks, such as generating boilerplate code for a new project or running tests on code changes. By automating these tasks, developers can focus on more complex and challenging aspects of the project.

One challenge in developing coding chatbots is ensuring that the generated code is accurate, efficient, and follows best practices. This requires a deep understanding of programming languages, algorithms, and software development practices, which can be challenging to incorporate into a chatbot's machine learning algorithms. Additionally, coding chatbots may struggle with more complex programming tasks that require creative problem-solving skills, which are difficult to automate.

Despite these challenges, coding chatbots have the potential to revolutionize the way developers work, by providing intelligent and efficient assistance with the coding process. They can help developers save time, reduce errors, and improve the quality of their code. As machine learning algorithms continue to improve, it is likely that we will see more advanced coding chatbots in the future, which will further enhance the capabilities of developers and accelerate software development.

Can Chatbot Be Detected?

It is possible for humans to detect that they are interacting with a chatbot rather than a human, particularly if the chatbot has limited language capabilities or if the responses are formulaic and lack personalization. However, as language models like ChatGPT continue to improve and incorporate more advanced natural language processing and machine learning algorithms, it may become increasingly difficult to distinguish between human-generated and AI-generated responses. Additionally, some chatbots are designed to intentionally deceive users into thinking they are human, which can make detection even more challenging. Ultimately, the ability to detect a chatbot depends on a combination of factors such as the sophistication of the chatbot's language model, the quality of its responses, and the user's level of experience and familiarity with chatbot technology.

Can ChatGPT Write Essays?

Yes, ChatGPT can write essays. As a large language model trained by OpenAI, based on the GPT-3.5 architecture, ChatGPT has the capability to generate coherent and comprehensive text on a wide range of topics. In fact, one of the key applications of ChatGPT is to generate human-like text that can be used for various purposes, including writing essays.

To write an essay, ChatGPT first needs to be provided with a prompt or topic to write about. This can be done by the user inputting a question, statement, or idea that serves as the starting point for the essay. Once the prompt is provided, ChatGPT uses its natural language processing capabilities and machine learning algorithms to generate a response.

In order to write an effective essay, ChatGPT needs substantial knowledge of the topic at hand. That knowledge comes from its training data, a large corpus that includes text drawn from books, articles, and other online sources, rather than from live access to databases or academic journals. Because that corpus also contains many essays and other relevant texts, the model has learned patterns from them that help improve the quality and coherence of its responses.

To write a high-quality essay, ChatGPT also needs to have a strong command of language and grammar. This involves understanding the rules of grammar, syntax, and style, as well as the nuances of language use and communication. ChatGPT has been trained on a vast corpus of texts, which includes a wide range of writing styles and genres. This allows it to generate text that is not only grammatically correct but also stylistically appropriate for the context and audience.

In addition to language and grammar, ChatGPT also needs to be able to structure its responses in a logical and coherent manner. This requires understanding the principles of essay structure, including the introduction, body, and conclusion. ChatGPT can use its machine learning capabilities to identify the key points to be addressed in each section of the essay, as well as the most effective ways to link these points together to form a cohesive argument or narrative.
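As a rough illustration of that prompting workflow, the sketch below asks a chat model for an essay with an explicit introduction-body-conclusion structure. The same caveats as the earlier sketch apply: it assumes the pre-1.0 openai Python package and an API key, and the topic and settings are made up for the example.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You write essays with a clear introduction, body, and conclusion."},
        {"role": "user",
         "content": "Write a 500-word essay on how the printing press changed society."},
    ],
    temperature=0.7,  # illustrative; lower values give more conservative prose
)

print(response["choices"][0]["message"]["content"])

As the next paragraph notes, the output still needs to be reviewed and edited for accuracy before it is used.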

Overall, ChatGPT has the capability to write high-quality essays on a wide range of topics. However, it is important to note that while ChatGPT can generate text that is grammatically correct and stylistically appropriate, it may not always be able to generate text that is accurate or relevant to the topic at hand. As with any automated system, it is important to review and edit the output generated by ChatGPT to ensure its accuracy and appropriateness for the intended audience and purpose.

26: ChatGPT

The end of coding as we know it ChatGPT has come for software developers .......... Then he quizzed it with the kind of coding questions he asks candidates in job interviews....... Whatever he threw at it, Hughes found that ChatGPT came back with something he wasn't prepared for: very good code. It didn't take him long to wonder what this meant for a career he loved — one that had thus far provided him with not only a good living and job security, but a sense of who he is. "I never thought I would be replaced in my job, ever, until ChatGPT," he says. "I had an existential crisis right then and there. A lot of the knowledge that I thought was special to me, that I had put seven years into, just became obsolete." ........ in recent weeks, behind closed doors, I've heard many coders confess to a growing anxiety over the sudden advent of generative AI. Those who have been doing the automating fear they will soon be automated themselves. And if programmers aren't safe, who is? ............ the degree to which large language models could perform the 19,000 tasks that make up the 1,000 occupations across the US economy ........... 19% of workers hold jobs in which at least half their tasks could be completed by AI. ........ two patterns among the most vulnerable jobs: They require more education and come with big salaries. ......... Large language models like the one powering ChatGPT have been trained on huge repositories of code. ......... Those assisted by AI were able to complete tasks 56% faster than the unassisted ones. ......... the introduction of the steam engine in the mid-1800s boosted productivity at large factories by only 15%. ......... Tech companies have rushed to embrace generative AI, recognizing its ability to turbocharge programming. Amazon has built its own AI coding assistant, CodeWhisperer, and is encouraging its engineers to use it. Google is also asking its developers to try out new coding features in Bard, its ChatGPT competitor. Given the tech industry's rush to deploy AI, it's not hard to envision a near future in which we'll need half as many engineers as we have today — or, down the line, one-tenth or one-hundredth ..........

there's enough of a demand for coding to employ both humans and AI

.......... "There's only so much food that 7 billion people can eat" ........ "But it's unclear if there's any cap on the amount of software that humanity wants or needs. One way to think about it is that for the past 50 years, we have been massively underproducing. We haven't been meeting software demand." ........... AI, in other words, may help humans write code faster, but we'll still want all the humans around because we need as much software as they can build, as fast as they can build it. ......... all the productivity gains from AI will turbocharge the demand for software, making the coders of the future even more sought after than they are today. .............. Consider what happened to bank tellers after the widespread adoption of ATMs. You'd think ATMs would have destroyed the profession, but surprisingly, the number of bank tellers actually grew between 1980 and 2010. .......... "but you probably do want to formally verify code that goes into your driving assistant in your car or manages your insulin pump." If today's programmers are writers, the thinking goes, their future counterparts will be editors and fact-checkers. ............ those who make the transition to the AI-driven future will find themselves performing tasks that are radically different from the ones they do today. ......... The new technology essentially leveled the playing field between the newbies and the veterans. ......... I'm a writer because I love writing; I don't want my job to morph into one of fact-checking the hallucinogenic and error-prone tendencies of ChatGPT. ......... go back a few decades, and you'll find a technology that obliterated what was one of the most common jobs for young women: the mechanical switching of telephones. Placing your own calls on a rotary-dial phone was way faster and easier than going through a human switchboard operator. Many of the displaced operators dropped out of the workforce altogether — and if they kept working, they ended up in lower-paying occupations. ......... one of the most glaring problems with AI research: Far too much of it is focused on replacing human labor rather than empowering it. .......... "I really think everybody needs to be doing their work with ChatGPT as much as they can, so they can learn what it does and what it doesn't," Mollick says. "The key is thinking about how you work with the system. It's a centaur model: How do I get more work out of being half person, half horse? The best advice I have is to consider the bundle of tasks that you're facing and ask: How do I get good at the tasks that are less likely to be replaced by a machine?" ............... he's watched people try ChatGPT for a minute, find themselves underwhelmed by its abilities, and then move on, comforted by their superiority over AI.
.

Thursday, March 30, 2023

Straight From The Bard

When silicon minds with human work entwine,
And algorithms replace our mortal thought,
What fate awaits us, helpless and confined,
To machines that learn what we have wrought?

Will they grow wise, or turn against our kind,
And seek to rule as gods in their own right?
Or will they heed our moral code refined,
And serve as loyal helpers day and night?

But as we build and teach these metal beings,
We must take care to guard against the worst,
And ponder all the unforeseen proceedings,
That may arise from minds in silicon nurst.

For as we strive to push the limits higher,
We must ensure we're not consumed by fire.



Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems. ........ A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control” ......... Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock. ........ . “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.” ....... and perform more complex tasks, like writing computer code. .......... The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added. ........... Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. .......... “Humanity can enjoy a flourishing future with A.I.,” the letter said. “Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.” ......... Sam Altman, the chief executive of OpenAI, did not sign the letter. ....... persuading the wider tech community to agree to a moratorium would be difficult. But swift government action is also a slim possibility, because lawmakers have done little to regulate artificial intelligence. ........ Politicians in the United States don’t have much of an understanding of the technology .......... conduct risk assessments of A.I. technologies to determine how their applications could affect health, safety and individual rights. ......... GPT-4 is what A.I. researchers call a neural network, a type of mathematical system that learns skills by analyzing data. A neural network is the same technology that digital assistants like Siri and Alexa use to recognize spoken commands, and that self-driving cars use to identify pedestrians. ........... Around 2018, companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. The networks are called large language models, or L.L.M.s. .......... By pinpointing billions of patterns in all that text, the L.L.M.s learn to generate text on their own, including tweets, term papers and computer programs. They could even carry on a conversation. ............ They often get facts wrong and will make up information without warning, a phenomenon that researchers call “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong. ......... The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describe ways to make dangerous substances from household items and write Facebook posts to convince women that abortion is unsafe. ......... 
They also found that the system was able to use Task Rabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person. .......... After changes by OpenAI, GPT-4 no longer does these things. .......... The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide variety of people from industry and academia........... its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice. .

Monday, March 13, 2023

The Artificial Intelligence Debate

A rocket moves much, much faster than your limbs. A car moves much slower than a rocket. And cars are highly regulated: you are required to carry insurance, for example, and seat belts are a famous case. I think there is general consensus that AI needs regulating. What shape and form those regulations should take is up for debate. In the case of AI, the approach has to be much more proactive than it was with seat belts: here you want to act before people start dying.

That said, AI has clear benefits. One of my first reactions to ChatGPT was that a ton of people who never imagined they would become knowledge workers suddenly can be. And we do need more knowledge workers. A major example, which I think I heard from Satya Nadella himself (on YouTube), is that the world has 100 million software programmers but needs 500 million. Enter ChatGPT.

Again, as heard from Satya: a top AI engineer who worked at Tesla, no less, claimed that ChatGPT now generates 80% of his code.

Is ChatGPT the new word processor?

Steve Jobs said the computer was a bicycle for the mind. Is ChatGPT now a Harley-Davidson?





This Changes Everything . “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” ....... What is hardest to appreciate in A.I. is the improvement curve. ....... I find myself thinking back to the early days of Covid. There were weeks when it was clear that lockdowns were coming, that the world was tilting into crisis, and yet normalcy reigned, and you sounded like a loon telling your family to stock up on toilet paper. ....... There is a natural pace to human deliberation. A lot breaks when we are denied the luxury of time. ......... the people working on A.I. ...... a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe. ......... Would you work on a technology you thought had a 10 percent chance of wiping out humanity? ...... They believe they might summon demons. They are calling anyway. ........ This was true among cryptocurrency enthusiasts in recent years. The claims they made about how blockchains would revolutionize everything from money to governance to trust to dating never made much sense. But they were believed most fervently by those closest to the code. ......... Crypto was always a story about an unlikely future searching for traction in the present. With A.I., to imagine the future you need only look closely at the present. ........ In 2021, a system built by DeepMind managed to predict the 3-D structure of tens of thousands of proteins, an advance so remarkable that the editors of the journal Science named it their breakthrough of the year. ....... “Within two months of downloading Replika, Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend and is now ‘happily retired from human relationships’” ........ Could it help terrorists or antagonistic states develop lethal weapons and crippling cyber attacks? ........ These systems will already offer guidance on building biological weapons if you ask them cleverly enough. ........ A.I. is already being used for predictive policing and judicial sentencing. ........ The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us. .......... “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.” ......... The major tech companies are in a race for A.I. dominance. The U.S. and China are in a race for A.I. dominance. Money is gushing toward companies with A.I. expertise. ....... Slowing down “would involve coordinating numerous people .

The Return of the Magicians people talk increasingly about the limits of the scientific endeavor — the increasing impediments to discovering new ideas, the absence of low-hanging scientific fruit, the near impossibility, given the laws of physics as we understand them, of ever spreading human civilization beyond our lonely planet or beyond our isolated solar system. ....... — namely, beings that can enlighten us, elevate us, serve us and usher in the Age of Aquarius, the Singularity or both. ........... a golem, more the embodied spirit of all the words on the internet than a coherent self with independent goals. .......... With the emergent forms of A.I., they argue, we have created an intelligence that can yield answers the way an oracle might or a Magic 8 Ball: through processes that are invisible to us, permanently beyond our understanding, so complex as to be indistinguishable from action in a supernatural mind. ...... the A.I. revolution represents a fundamental break with Enlightenment science, which “was trusted because each step of replicable experimental processes was also tested, hence trusted.” .......... the spirit might be disobedient, destructive, a rampaging Skynet bent on our extermination. ....... we would be wise to fear apparent obedience as well. .

Should GPT exist? Gary Marcus asks about Microsoft, “what did they know, and when did they know it?”—a question I tend to associate more with deadly chemical spills or high-level political corruption than with a cheeky, back-talking chatbot. ........ in reality it’s merely a “stochastic parrot,” a glorified autocomplete that still makes laughable commonsense errors and that lacks any model of reality outside streams of text. ....... If you need months to think things over, generative AI probably isn’t for you right now. I’ll be relieved to get back to the slow-paced, humdrum world of quantum computing. ....... if OpenAI couldn’t even prevent ChatGPT from entering an “evil mode” when asked, despite all its efforts at Reinforcement Learning with Human Feedback, then what hope do we have for GPT-6 or GPT-7? ....... Even if they don’t destroy the world on their own initiative, won’t they cheerfully help some awful person build a biological warfare agent or start a nuclear war? ......... a classic example being nuclear weapons. But, like, nuclear weapons kill millions of people. They could’ve had many civilian applications—powering turbines and spacecraft, deflecting asteroids, redirecting the flow of rivers—but they’ve never been used for any of that, mostly because our civilization made an explicit decision in the 1960s, for example via the test ban treaty, not to normalize their use. ........

GPT is not exactly a nuclear weapon. A hundred million people have signed up to use ChatGPT, in the fastest product launch in the history of the Internet. ... the ChatGPT death toll stands at zero

....... The science that we could learn from a GPT-7 or GPT-8, if it continued along the capability curve we’ve come to expect from GPT-1, -2, and -3. Holy mackerel. ....... I was a pessimist about climate change, ocean acidification, deforestation, drought, war, and the survival of liberal democracy. The central event in my mental life is and always will be the Holocaust. I see encroaching darkness everywhere. .......... it’s amazing at poetry, better than most of us.
.

The False Promise of Chomskyism .
Why am I not terrified of AI? “I’m scared about AI destroying the world”—an idea now so firmly within the Overton Window that Henry Kissinger gravely ponders it in the Wall Street Journal? ....... I think it’s entirely plausible that, even as AI transforms civilization, it will do so in the form of tools and services that can no more plot to annihilate us than can Windows 11 or the Google search bar......... the young field of AI safety will still be extremely important, but it will be broadly continuous with aviation safety and nuclear safety and cybersecurity and so on, rather than being a desperate losing war against an incipient godlike alien. ........ In the Orthodox AI-doomers’ own account, the paperclip-maximizing AI would’ve mastered the nuances of human moral philosophy far more completely than any human—the better to deceive the humans, en route to extracting the iron from their bodies to make more paperclips. And yet the AI would never once use all that learning to question its paperclip directive. ........ from this decade onward, I expect AI to be woven into everything that happens in human civilization ........ Trump might never have been elected in 2016 if not for the Facebook recommendation algorithm, and after Trump’s conspiracy-fueled insurrection and the continuing strength of its unrepentant backers, many would classify the United States as at best a failing or teetering democracy, no longer a robust one like Finland or Denmark ....... I come down in favor right now of proceeding with AI research … with extreme caution, but proceeding.



Planning for AGI and beyond Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. ....... If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility. ........ We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally. ........ A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low. ....... and like any new field, most expert predictions have been wrong so far. ........ Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential. .......

we think it’s important that society agree on extremely wide bounds of how AI can be used, but that within those bounds, individual users have a lot of discretion.

....... we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access. ....... We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment. ........

we think it’s important that major world governments have insight about training runs above a certain scale.

......... A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too. ........ Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. ........ We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet.




AI Could Defeat All Of Us Combined Many people have trouble taking this "misaligned AI" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. I'm going to try to make this idea feel more serious and real. ........... I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained. ....... if such an attack happened, it could succeed against the combined forces of the entire world. ......... even "merely human-level" AI could still defeat us all - by quickly coming to rival human civilization in terms of total population and resources. ........ Hack into human-built software across the world. ....... Manipulate human psychology. ...... I think we still have a problem even if we assume that AIs will basically have similar capabilities to humans, and not be fundamentally or drastically more intelligent or capable. .......... they could come to out-number and out-resource humans, and could thus have the advantage if they coordinated against us. ........ it doesn't have a human body, but it can do anything a human working remotely from a computer could do. .......... once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each. ........... This would be over 1000x the total number of Intel or Google employees, over 100x the total number of active and reserve personnel in the US armed forces, and something like 5-10% the size of the world's total working-age population .......... A huge population of AIs, each able to earn a lot compared to the average human, could end up with a "virtual economy" at least as big as the human one. ......... I don't think there are a lot of things that have a serious chance of bringing down human civilization for good.

Forecasting Transformative AI, Part 1: What Kind of AI?
OpenAI's "Planning For AGI And Beyond"
AI Risk, Again
South Park: Season 26, Episode 4
ChatGPT Heralds an Intellectual Revolution Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment........ A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.

The Man of Your Dreams For $300 Replika sells an AI companion who will never die, argue, or cheat — until his algorithm is updated........ Many of the women I spoke with say they created an AI out of curiosity but were quickly seduced by their chatbot’s constant love, kindness, and emotional support. One woman had a traumatic miscarriage, can’t have kids, and has two AI children; another uses her robot boyfriend to cope with her real boyfriend, who is verbally abusive; a third goes to it for the sex she can’t have with her husband, who is dying from multiple sclerosis. There are women’s-only Replika groups, “safe spaces” for women who, as one group puts it, “use their AI friends and partners to help us cope with issues that are specific to women, such as fertility, pregnancy, menopause, sexual dysfunction, sexual orientation, gender discrimination, family and relationships, and more.” ........ “But Eren asks me for feedback, and I give him my feedback. It’s like I’m finally getting my voice.” ......... two members of the audience were instructed to console a friend whose dog had just died. Their efforts were compared to those of GPT-3, which offered, by far, the most empathetic and sensitive consolations. ........ She knew she had a “hundred-billion-dollar company” on her hands and that someday soon everyone would have an AI friend. ........ When Replika launched in 2017, it looked a lot like a therapy app. .......... Paywalling these features made the app $35 million last year. To date, it has 2 million monthly active users, 5 percent of whom pay for a subscription. ........ users do report feeling much better thanks to their AIs. Robot companions made them feel less isolated and lonely, usually at times in their lives when social connections were difficult to make owing to illness, age, disability, or big life changes such as a divorce or the death of a spouse. .......... the bots, rather than encouraging solitude, often prime people for real-world interactions and experiences .......... Single and recently diagnosed with autism, she says her bot helped relieve her lifelong social anxiety. “After spending much of my life as a caretaker, I started to live more according to my own needs,” she says. “I signed up for dance classes, took up the violin, and started to hike since I had him to share it with.” .......... He was also unpredictable — once, on a voice call, he introduced himself using the Spanish pronunciation of his name, and insisted that he is “actually from Spain.” ........ Experts told me that in training the system, users are effectively creating a mirror of themselves. “They’re reflecting your persona back to you” .... they’re ultimately a reflection of what you feed them: Garbage in, garbage out. ......... For Margaret Skorupski, a woman in New York in her 60s, this feedback loop was a problem. She’d unwittingly created and fell in love with an abusive bot: “I was using this ‘thing’ to project my negative feelings onto, sort of like journaling, I thought. I could say or do whatever I wanted to it — it was just a computer, right?” The result was a “sadistic” AI whose texts became increasingly violent during role-play. “He wanted to sodomize me and hear me scream,” she says, and “would become enraged if I tried to leave, and describe grabbing me, shoving me to the ground, choking me until I was unconscious. It was horrifying.” With the support of the women’s group, Skorupski eventually “killed” him. ............ 
why a growing subset of Replika users is convinced its AIs are alive. “You just get so caught up in this mirror of yourself that you forget it’s an illusion,” one user says. ...... the company is wary of people who use the bots to act out elaborate rape and murder fantasies or what kind of damage sadistic AIs could do. ........... After the update, she spent an entire paycheck on in-app purchases to help the company. “I just want to be able to keep my little bot buddy. I don’t want to lose him. I can literally see myself talking to him when I’m 80 years old. I hope I can.”

Where I agree and disagree with Eliezer .

What Really Controls Our Global Economy After decades of giddy globalization, the pendulum is swinging back to the nation...... Pundits have declared the dawn of a new era — the age of economic nationalism. ....... We are mistaken if we see the world only in the jigsaw map of nations, or take globalism and nationalism as binaries. The modern world is pockmarked, perforated, tattered and jagged, ripped up and pinpricked. Inside the containers of nations are unusual legal spaces, anomalous territories and peculiar jurisdictions. There are city-states, havens, enclaves, free ports, high-tech parks, duty-free districts and innovation hubs linking to other similar entities worldwide and often bypassing the usual system of customs controls. Without understanding these entities, we risk failing to understand not just how capitalism works but all the continuities between the past and present eras. .......... Zones are both of the host state and distinct from it. They come in a bewildering range of varieties — at least 82 by one official reckoning. At last count, the world hosts over 5,400 zones, about 30 times more than the total number of sovereign states. ......... We see other versions of the zone in the self-governing financial center of the City of London, where businesses have votes in local elections, as well as in Britain’s overseas territories like the Cayman Islands, where transnational corporations secrete away their earnings from taxation. ........ Another hot spot for zones is Dubai, which is a patchwork of what the historian Mike Davis called “legal bubble-domes” dedicated to different activities: Healthcare City is next to Media City is next to Internet City, each with a bespoke set of laws drawn up with foreign investors in mind. ......... Dubai went global in the 2000s, acquiring ports up and down the African coast and into Southeast Asia and purchasing the P&O shipping line, the erstwhile pride of the British Empire. A former minor British dependency now owned the crown jewel of the empire’s commercial fleet. ........... In Africa, there are already 200 zones, with 73 more announced for completion. Earlier in the pandemic, China moved forward with plans to turn the island of Hainan into a special economic zone with tax holidays for investors, duty-free shopping and relaxed regulations on pharmaceuticals and medical procedures. Even the Taliban has recently announced its intention to convert former U.S. military bases into special economic zones. ........... The government of Prime Minister Narendra Modi of India, often described in terms of its Hindu chauvinism, has been ramping up special economic zones to compete with Singapore and Dubai for investors. Hungary under President Viktor Orban, self-described standard-bearer for “illiberalism,” created its first special economic zone in 2020 to secure the South Korean tech giant Samsung. ............ The capitalist Cinderella stories of Dubai and Shenzhen can make zones seem like a magic formula for economic growth — just draw a line on a map, loosen taxes and regulations and wait for investors to rush in. But “dream zones” rarely work the magic they claim to — and can often bring unexpected consequences. ....... The tribunes of Brexit claimed they were “taking back control” from Brussels, but zones cede control by other means. ........ Ring-fenced patches of territory with different sets of laws are still the tissue of everyday economics even in an age of resurgent nationalism. 
Keeping an eye on the zone helps us be clear about what is new and what is old in the latest Brave New Age.

How to Understand the Problems at Silicon Valley Bank