Thursday, March 30, 2023

30: GPT-4



Don't trap me in a chat window Today's announcement video of Copilot for Microsoft Office shows a great integration of the chat experience in Excel. I'm really looking forward to what they do next.



Can New York Fix Its Housing Crisis? It Depends on the Suburbs The governor’s quest to force suburbs around New York City to build more housing is meeting with resistance........ some Scarsdale residents complained that new residents could strain schools and burden taxpayers. ......... Resistance to bigger development is a familiar dynamic in suburbs like Scarsdale, where single-family homes and sprawl are distinctive features. .............. a mission to get 800,000 units built over the next decade and ease the state’s housing crisis. ........ The New York City suburbs are considered the birthplace of American suburbia: New Yorkers began moving in droves to communities in Westchester County and on Long Island to escape urban life beginning in the 20th century.

Straight From The Bard

When silicon minds with human work entwine,
And algorithms replace our mortal thought,
What fate awaits us, helpless and confined,
To machines that learn what we have wrought?

Will they grow wise, or turn against our kind,
And seek to rule as gods in their own right?
Or will they heed our moral code refined,
And serve as loyal helpers day and night?

But as we build and teach these metal beings,
We must take care to guard against the worst,
And ponder all the unforeseen proceedings,
That may arise from minds in silicon nurst.

For as we strive to push the limits higher,
We must ensure we're not consumed by fire.



Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems. ........ A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control” ......... Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock. ........ “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.” ....... and perform more complex tasks, like writing computer code. .......... The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added. ........... Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. .......... “Humanity can enjoy a flourishing future with A.I.,” the letter said. “Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.” ......... Sam Altman, the chief executive of OpenAI, did not sign the letter. ....... persuading the wider tech community to agree to a moratorium would be difficult. But swift government action is also unlikely, because lawmakers have done little to regulate artificial intelligence. ........ 
Politicians in the United States don’t have much of an understanding of the technology .......... conduct risk assessments of A.I. technologies to determine how their applications could affect health, safety and individual rights. ......... GPT-4 is what A.I. researchers call a neural network, a type of mathematical system that learns skills by analyzing data. A neural network is the same technology that digital assistants like Siri and Alexa use to recognize spoken commands, and that self-driving cars use to identify pedestrians. ........... Around 2018, companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. The networks are called large language models, or L.L.M.s. .......... By pinpointing billions of patterns in all that text, the L.L.M.s learn to generate text on their own, including tweets, term papers and computer programs. They could even carry on a conversation. ............ They often get facts wrong and will make up information without warning, a phenomenon that researchers call “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong. ......... The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describe ways to make dangerous substances from household items and write Facebook posts to convince women that abortion is unsafe. ......... They also found that the system was able to use Task Rabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person. .......... After changes by OpenAI, GPT-4 no longer does these things. .......... 
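The passage above describes how large language models learn by pinpointing statistical patterns in text and then generating new text from those patterns. As a toy illustration of that idea (and nothing more; real LLMs are neural networks with billions of parameters, not word-count tables), here is a minimal bigram model that "learns" which word follows which and then generates text by walking those learned transitions:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record which word follows which: the crudest possible 'pattern' model."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Emit new text by repeatedly sampling a learned follower word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Every pair of adjacent words in the generated output was seen somewhere in the training text, which is also why such a model, like the LLMs described above, can produce fluent-sounding text with no notion of whether it is true.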
The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide variety of people from industry and academia. ........... its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice.

Tuesday, March 28, 2023

The AI Arms Race Is On

BURNING ROCKET DEBRIS LEAVES EPIC STREAK ACROSS FLORIDA SKY PARTIERS IN MIAMI CAUGHT A HECK OF A VIDEO.
REPLIKA USERS REJOICE! EROTIC ROLEPLAY IS BACK IN AI-POWERED APP THE COMPANY IS GIVING USERS "THEIR PARTNERS BACK EXACTLY THE WAY THEY WERE."
AI Seems to Do Better on Tasks When Asked to Reflect on Its Mistakes If at first you don't succeed... large language models (LLMs) might be able to learn from their own mistakes — just like humans. ...... Teaching them to do so, they say, might be able to push AI technologies into a new phase of autonomous problem-solving. ....... their methodology dubbed "Reflexion" is a framework for teaching AI models via prompts to apply a trial-and-error technique to their outputs. ........ when it messed up, it was prompted with the Reflexion technique to find those mistakes for itself — a process that they claim helps the program evolve, just like humans. ......... "To achieve full automation, we introduce a straightforward yet effective heuristic that enables the agent to pinpoint hallucination instances, avoid repetition in action sequences, and, in some environments, construct an internal memory map of the given environment," the researchers write in their paper.
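The trial-and-error idea behind Reflexion can be sketched as a plain loop: attempt the task, evaluate the result, and on failure ask the model to reflect on what went wrong, feeding that reflection back into the next attempt. The function and stub names below are hypothetical illustrations, not the paper's actual code:

```python
def reflexion_loop(task, attempt_fn, evaluate_fn, reflect_fn, max_trials=3):
    """Trial-and-error loop in the spirit of Reflexion (a sketch, not the
    paper's implementation): attempt, evaluate, and on failure store a
    self-reflection that future attempts can condition on."""
    memory = []  # accumulated self-reflections
    for trial in range(max_trials):
        output = attempt_fn(task, memory)
        ok, feedback = evaluate_fn(task, output)
        if ok:
            return output, trial + 1
        memory.append(reflect_fn(task, output, feedback))
    return None, max_trials

# Stub "model": fails until its memory contains a reflection to learn from.
def attempt(task, memory):
    return "correct" if memory else "wrong"

def evaluate(task, output):
    return (output == "correct", "output did not match expectation")

def reflect(task, output, feedback):
    return f"Last attempt produced {output!r}; feedback: {feedback}."

result, trials = reflexion_loop("demo task", attempt, evaluate, reflect)
print(result, trials)  # correct 2
```

In a real system the three callbacks would be prompts to an LLM; the point of the sketch is only the control flow, in which failures become text the agent carries into its next try.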



OpenAI CEO Warns That Competitors Will Make AI That’s More Evil "There will be other people who don't put some of the safety limits that we put on it." ....... "Society, I think, has a limited amount of time to figure out how to react to that," he continued. "How to regulate that, how to handle it."

It's hard to argue that we're not in an AI arms race, and in that competitive and breakneck landscape, a lot of companies and superpowers out there are likely to prioritize both power and profit over safety and ethics. It's also true that AI tech is rapidly outpacing government regulation, despite the many billions being poured into the software. No matter how you shake it, that's a dangerous combination. ....... (And some, like OpenAI's largest partner Microsoft, allegedly tested the extremely chaotic Bing AI in India, ran into serious problems, and then released it in the US anyway.)

STANFORD SCIENTISTS PRETTY MUCH CLONED OPENAI'S GPT FOR A MEASLY $600 WE LOVE TO SEE IT. Stanford's Center for Research on Foundation Models announced last week that its researchers had "fine-tuned" Meta’s LLaMA 7B large language model (LLM) using OpenAI's GPT API — and for a bargain basement price. ........ the Stanford CRFM scientists said they spent "less than $500" on OpenAI's API and "less than $100" on LLaMA, based on the amount of time the researchers spent training Alpaca using the proprietary models. ......... All the same, Alpaca does, as the Stanford CRFM folks note, suffer from "several common deficiencies of language models, including hallucination, toxicity, and stereotypes," with hallucination being of particular concern.
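The cheap part of the Alpaca recipe is distillation-style data generation: ask a strong "teacher" model to answer a set of seed instructions, then fine-tune a smaller model on the resulting (instruction, output) pairs. The sketch below illustrates only the data-generation step, with a stub standing in for the real API call; the function names are hypothetical, not Stanford's code:

```python
import json

def generate_training_pairs(seed_instructions, teacher_fn):
    """Distillation-style data generation (a sketch of the idea behind
    Alpaca): collect a teacher model's answers to seed instructions as
    fine-tuning data for a smaller model."""
    pairs = []
    for instruction in seed_instructions:
        pairs.append({"instruction": instruction,
                      "output": teacher_fn(instruction)})
    return pairs

# Stub teacher standing in for a paid API call (the actual work used
# OpenAI's GPT API, which is where the "less than $500" went).
def stub_teacher(instruction):
    return f"[teacher answer to: {instruction}]"

seeds = ["Explain what a large language model is.",
         "Write a haiku about spring."]
dataset = generate_training_pairs(seeds, stub_teacher)
print(json.dumps(dataset[0]))
```

One consequence of this design is also visible in the deficiencies noted above: the student inherits whatever the teacher gets wrong, hallucinations included, because the pairs are never checked against ground truth.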


MICROSOFT'S STUNNING COPILOT AI DEMO COULD CHANGE OFFICE WORK FOREVER IMAGINE CLIPPY... ON HULK-LIKE STEROIDS. Still riding high on the success of integrating ChatGPT in Bing, Microsoft just announced that its GPT-4-powered Copilot is coming to Office 365 apps. With it, users will be able to generate entire Word documents, Excel spreadsheets, Outlook emails, and PowerPoint presentations with a click of a button, horizontally integrating all those apps (along with Microsoft Teams)........ "Copilot is a whole new way of working." ......... The tech giant's Copilot wants to be your AI-powered secretary, nagging you about a meeting you're dreading, informing you of a new hire, or even snitching on coworkers who were supposed to be back in the office after their vacation. ........ Think of this as the Tesla Self-Driving of office work: Sure, you can fall asleep at the wheel, and it might get you home, but it might also cause an eight-vehicle crash. ......... In short, Clippy's AI-powered manic cousin is about to come cannonballing into your office life and either deliver your salvation from the most boring, soul-crushing work you face every day, or set a garbage fire to your career as an anarchic agent of chaos under the guise of a helpful productivity tool.

AI-imager Midjourney v5 stuns with photorealistic images—and 5-fingered hands "Lack of dopamine hits, because the results are too perfect every time." photorealistic images at a quality level that some AI art fans are calling creepy and "too perfect." ......... "MJ v5 currently feels to me like finally getting glasses after ignoring bad eyesight for a little bit too long," said Julie Wieland, a graphic designer who often shares her Midjourney creations on Twitter. "Suddenly you see everything in 4k, it feels weirdly overwhelming but also amazing." ......... Over the past year, the idea that AI art generators can't render hands correctly has become something of a cultural trope. Notably, Midjourney v5 can generate realistic human hands fairly well. "Hands are correct most of the time, with 5 fingers instead of 7-10 on one hand," said Wieland........ and offering a 2x increase in image resolution.

Chaos Inside Google as Execs Try to Figure Out How to Actually Use AI "It's an intense time." ....... But Google seriously fumbled the feature's launch, with the bot's first advertisement accidentally showcasing the bot's inability to find and present accurate information to users. Google's stock nosedived as a result, leading the company to lose $100 billion in a day. .......... failed to effectively articulate what Bard is actually supposed to be........ "Bard and ChatGPT are large language models, not knowledge models," one Googler asked execs. "They are great at generating human-sounding text, they are not good at ensuring their text is fact-based. Why do we think the big first application should be Search, which at its heart is about finding true information?" .......... "The magic that we're finding in using the product is really around being this creative companion to helping you be the sparkplug for imagination, explore your curiosity, etc." ......... Google's real problem: regardless of the fact that no one's tech was or is really ready for mass consumption, OpenAI and Microsoft beat Google to the punch.

LEVI'S MOCKED FOR USING AI TO GENERATE "DIVERSE" DENIM MODELS "I'M SO SICK OF THIS SH*T." "AI generated models huh?" games journalist Jefferey Rousseau tweeted. "I guess all the people of color just don't exist anymore." ........ "Levi’s using AI to generate 'more diverse models' instead of, you know, actually hiring more diverse models is exactly the kind of crap you’d expect from this industry"

GENE HACKERS CREATE MEATBALL FROM RESURRECTED MAMMOTH MEAT WOULD YOU EAT IT?