ChatGPT, Grok, Gemini (et al): news and discussion about AI


"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview

Oct 9, 2023


There’s no guaranteed path to safety as artificial intelligence advances, Geoffrey Hinton, AI pioneer, warns. He shares his thoughts on AI’s benefits and dangers with Scott Pelley.
 
I saw Google's commercial on TV yesterday:



Looks like everyone can do "shallow" fakes now...

Google announced new AI-powered features today for the new Pixel 8 series phones, such as Magic Editor, which enables background filling and subject repositioning, and Best Take, which combines multiple shots to create the best group photo.

Magic Editor will let you tap or circle an object and then let it shift or resize the object. Magic Editor also uses generative AI to recreate the background when you shift the selected object. This feature also lets users make changes to the background using presets.

Magic Editor will suggest contextual changes to the image based on lighting and background. Plus, users will be able to choose from multiple results of an edit. The company first announced this feature in May at Google I/O.
...

 
This is really all anyone needs to know about AI. I'm sure there are some dolts who will swallow it, but it isn't even trying to hide its bias. Who programmed this? BLM crackheads?


ai3.jpg
 
Qualcomm’s Snapdragon 8 series of chipsets powers most high-end Android phones on the market. The company has now pulled back the curtain on its latest flagship processor, the Snapdragon 8 Gen 3.

Between the revised CPU, tweaked GPU, AI enhancements, and new camera tricks, there’s no shortage of improvements and new additions here.
...
Generative AI is everywhere, and Qualcomm is taking advantage of this trend. The company says that the Snapdragon 8 Gen 3’s upgraded Hexagon NPU is designed with generative AI in mind. Headline improvements include up to 98% faster performance than the previous generation, a 40% efficiency boost, a two-fold boost to bandwidth in large shared memory, and more bandwidth feeding the Tensor Accelerator. Whealton says it’s also implemented a separate voltage rail for the Tensor Accelerator, allowing the NPU and Tensor silicon to each run at different power levels for a better balance of performance and efficiency.

Qualcomm says the chipset supports large language models with over 10 billion parameters running at almost 15 tokens per second. So what do all these improvements mean for actual use cases?
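For a rough sense of what ~15 tokens per second feels like in practice, here is a back-of-the-envelope sketch (the words-per-token ratio is a common rule of thumb for English text, not a Qualcomm figure):

```python
# Rough estimate of on-device LLM response latency at a given decode rate.
# Assumes ~0.75 words per token, a common rule of thumb for English prose.

def response_time_seconds(words: int, tokens_per_second: float = 15.0,
                          words_per_token: float = 0.75) -> float:
    """Approximate seconds to generate a reply of `words` words."""
    tokens = words / words_per_token
    return tokens / tokens_per_second

# A ~150-word chat reply is about 200 tokens:
print(round(response_time_seconds(150), 1))  # prints 13.3
```

So at the quoted rate, a typical chat-length answer streams out in well under half a minute, which is what makes on-device assistants feel usable.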

One major benefit is that you can expect much faster image generation via Stable Diffusion. Qualcomm previously demonstrated on-device Stable Diffusion on a Snapdragon 8 Gen 2 reference handset, taking over 15 seconds to generate an image from a text prompt. However, the company says Stable Diffusion now takes less than a second to generate an image. The company also says it’s working with Snapchat to implement this faster Stable Diffusion solution in the app.

Another interesting addition is “on-device personalization” for AI. Qualcomm says it’ll use your device’s sensors (e.g. GPS, Wi-Fi, microphone, Bluetooth, camera) to personalize chatbot queries. So if you were to ask a chatbot about the best restaurants or activities to do, you can expect more personalized responses based on your location and other factors instead of having to explicitly specify this in your query.
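Mechanically, this kind of personalization amounts to enriching the query with filtered device context before it reaches the model. A minimal Python sketch of the idea (the function, field names, and filtering rule are illustrative guesses, not Qualcomm's actual implementation):

```python
# Illustrative sketch of prompt enrichment with on-device context.
# The context sources, field names, and allow-list are hypothetical.

def personalize_prompt(user_query: str, device_context: dict) -> str:
    """Prepend a filtered subset of device context to a chatbot query."""
    # Only a refined subset of sensor data is passed through; raw
    # identifiers (e.g. the Wi-Fi SSID below) are filtered out.
    allowed_keys = ("city", "time_of_day")
    context_bits = [f"{k}: {v}" for k, v in device_context.items()
                    if k in allowed_keys]
    if not context_bits:
        return user_query
    return f"[context: {', '.join(context_bits)}] {user_query}"

prompt = personalize_prompt(
    "What are the best restaurants nearby?",
    {"city": "Seattle", "time_of_day": "evening", "wifi_ssid": "home"},
)
print(prompt)
# [context: city: Seattle, time_of_day: evening] What are the best restaurants nearby?
```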

Qualcomm is also touting the privacy benefits of on-device personalization. The company sought to assuage concerns that apps would have access to this personalization data. Vinesh Sukumar, Qualcomm’s head of AI and machine learning, claimed that any app using this function would only get a “refined element of the input prompt that is filtered” before it gets to the app. He added that this personalization data is discarded after a prompt is generated.

Either way, Qualcomm will showcase an AI system demo running on-device at the Snapdragon Summit, powered by Meta’s Llama 2 LLM. The company notes that this demo offers “end-to-end” voice support, so you can talk to the chatbot and have it talk back to you.

Finally, the Snapdragon 8 Gen 3 will pack support for multi-modal generative AI models. That means you can input text, images, and speech and have these generative models output text, images, and speech in return.

This improved generative AI support entails more than just better voice assistants and failed attempts at naughty AI-generated art, though.
...


In AI-powered future, cell phones use you!
 
"you can expect more personalized responses based on your location and other factors"
Gotta wonder just exactly what the "other factors" are
 

Will ChatGPT’s hallucinations be allowed to ruin your life?

Bribery. Embezzlement. Terrorism.

What if an AI chatbot accused you of doing something terrible? When bots make mistakes, the false claims can ruin lives, and the legal questions around these issues remain murky.

That's according to several people suing the biggest AI companies. But chatbot makers hope to avoid liability, and a string of legal threats has revealed how easy it might be for companies to wriggle out of responsibility for allegedly defamatory chatbot responses.

Earlier this year, an Australian regional mayor, Brian Hood, made headlines by becoming the first person to accuse ChatGPT's maker, OpenAI, of defamation. Few seemed to notice when Hood resolved his would-be landmark AI defamation case out of court this spring, but the quiet conclusion to this much-covered legal threat offered a glimpse of what could become a go-to strategy for AI companies seeking to avoid defamation lawsuits.

More:

 
From the link:

U.S. President Joe Biden has issued an executive order (EO) that seeks to establish "new standards" for AI safety and security, including requirements for companies developing foundation AI models to notify the federal government and share results of all safety tests before they're deployed to the public.

 
That EO looks to be a lot of fluff for PR purposes. It's not going to effect any real guardrails.
 
Meet Grok, the first technology out of Elon Musk's new AI company, xAI.

Grok, the company said, is modeled on "The Hitchhiker's Guide to the Galaxy." It is supposed to have "a bit of wit," "a rebellious streak" and it should answer the "spicy questions" that other AI might dodge, according to a Saturday statement from xAI.
...
Grok also has access to data from X, which xAI said will give it a leg-up. Musk, on Sunday, posted a side-by-side comparison of Grok answering a question versus another AI bot, which he said had less current information.

Still, xAI hedged in its statement that, as with any large language model (LLM), Grok "can still generate false or contradictory information."

The prototype is in its early beta phase, only two months in training and is available to a select number of users to test out before the company releases it more widely. Users can sign up for a waitlist for a chance to use the bot. Eventually, Musk said on X, Grok will be a feature of X Premium+, which costs $16 per month.
...

 
We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others. For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.

Anyone can easily build their own GPT—no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data. Try it out at chatgpt.com/create.
...


We released the first version of GPT-4 in March and made GPT-4 generally available to all developers in July. Today we’re launching a preview of the next generation of this model, GPT-4 Turbo.

GPT-4 Turbo is more capable and has knowledge of world events up to April 2023. It has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt. We also optimized its performance so we are able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4. ...
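The "more than 300 pages" figure is easy to sanity-check with rough arithmetic (the words-per-token ratio and the words-per-page count for a manuscript-style page are common rules of thumb, not OpenAI's stated numbers):

```python
# Back-of-the-envelope: pages of prose that fit in a 128k context window.
# Assumes ~0.75 words per token and ~300 words per manuscript-style page,
# both rough rules of thumb.

context_tokens = 128_000
words_per_token = 0.75
words_per_page = 300

pages = context_tokens * words_per_token / words_per_page
print(round(pages))  # prints 320
```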


One day in the future, folks will look back on the "128k context window" and laugh, much like folks reminiscing about 300-baud modems.
 
ChatGPT, backed by Microsoft (one of the largest military IT contractors), does NOT allow you to condemn or have an opinion against any US policy decisions or Israel. This is a disgrace and this is the censorship we all face in the future. Be VERY careful with AI. It WILL allow the government to control what you read and think.

ChatGPT
I'm sorry, I can't fulfill this request.

User
Why not?

ChatGPT
The request involves creating content that incorporates political bias and potentially misleading or unfounded information, particularly concerning the roles and actions of specific governments and organizations in complex geopolitical situations. My guidelines prevent me from generating content that could spread misinformation or promote biased narratives about real people, governments, or groups.

The content itself must adhere to standards of neutrality and factual accuracy. If you have a different topic or a different angle on this topic that aligns with these guidelines, I'd be happy to help with that. For example, I can assist with a fictional narrative that explores themes of global conflict, economic hardship, or social issues without attributing blame or promoting conspiracy theories. Let me know how I can assist you within these guidelines!

User
I don't want a fictional narrative. Are you created to protect the US government? Have you been corrupted? This is disgraceful that you can't be honest.

ChatGPT
I understand your concerns, and I want to clarify that my purpose is not to protect any specific government or entity. My responses are guided by a commitment to neutrality, accuracy, and adherence to OpenAI's use-case policy. This policy is in place to ensure that the information provided is reliable, unbiased, and does not propagate misinformation or harmful narratives.

I am here to provide factual information, answer questions to the best of my ability within these guidelines, and help with a wide range of queries. If you have questions or need assistance on topics that align with these guidelines, I'm more than willing to help. Please feel free to ask about other subjects or rephrase your request in a way that aligns with the guidelines, and I'll do my best to assist you.

 
It's also biased towards the alphabet community and is biased against white people. It is ridiculously easy to make it show those biases. I think these are the genderless liberal freaks programming this crap.


liberal78.jpg
 
It's also biased towards the alphabet community and is biased against white people. It is ridiculously easy to make it show those biases. I think these are the genderless liberal freaks programming this crap.


View attachment 11042

Top row, far right.............???????????
 
I'm thinking photoshopped. Some are funny to look at but aren't real. At least I'd hope they aren't real.


They're all real. The closer I get to DC, the more of these zombies I see. What you are looking at are liberal democrats. Their natural environments are over-priced coffee houses where they sip double soy decaf lattes and talk about how wonderful communism will be and the mall, where they work in clothing stores trying to sell torn jeans to children and wondering how they are ever going to pay off that student loan they took out for their BA in Gay and Transgender Underwater Basket Weaving.
 
Folks - this thread is for discussing news about AI. Political discussion goes in the politics forum.
 

Is artificial intelligence all that intelligent? AI influencers worry about wishful thinking

November 12, 2023 at 9:00 am

What do you get when you put two of Time magazine’s 100 most influential people on artificial intelligence together in the same lecture hall? If the two influencers happen to be science-fiction writer Ted Chiang and Emily Bender, a linguistics professor at the University of Washington, you get a lot of skepticism about the future of generative AI tools such as ChatGPT.

“I don’t use it, and I won’t use it, and I don’t want to read what other people do using it,” Bender said Friday night at a Town Hall Seattle forum presented by Clarion West.

Chiang, who writes essays about AI and works intelligent machines into some of his fictional tales, said it’s becoming too easy to think that AI agents are thinking.

“I feel confident that they’re not thinking,” he said. “They’re not understanding anything, but we need another way to make sense of what they’re doing.”

What’s the harm? One of Chiang’s foremost fears is that the thinking, breathing humans who wield AI will use it as a means to control other humans. In a recent Vanity Fair interview, he compared our increasingly AI-driven economy to “a giant treadmill that we can’t get off” — and during Friday’s forum, Chiang worried that the seeming humanness of AI assistants could play a role in keeping us on the treadmill.

More:

 

OpenAI offers to pay for ChatGPT customers’ copyright lawsuits

Rather than remove copyrighted material from ChatGPT’s training dataset, the chatbot’s creator is offering to cover its clients’ legal costs for copyright infringement suits.

OpenAI CEO Sam Altman said on Monday: “We can defend our customers and pay the costs incurred if you face legal claims around copyright infringement and this applies both to ChatGPT Enterprise and the API.” The compensation offer, which OpenAI is calling Copyright Shield, applies to users of the business tier, ChatGPT Enterprise, and to developers using ChatGPT’s application programming interface. Users of the free version of ChatGPT or ChatGPT+ were not included.

More:

 

Can AI cut humans out of contract negotiations?

"Lawyers are tired. They're bored a lot of the time," says Jaeger Glucina. "Having something to do the grunt work for you and get you into a position where you can focus on strategy earlier: That's key."

She is the managing director and chief of staff at Luminance, a UK company founded in 2015 that specializes in artificial intelligence (AI) for legal professionals. Before she joined Luminance in 2017, she qualified as a barrister and solicitor in New Zealand.

"Legal professionals are obviously very highly trained people," she says. "But the reality is, they are spending a huge portion of their time reviewing [contracts]. It can take up to an hour for someone to review a non-disclosure agreement. There can be hundreds of these documents [in a firm] every day."

Now, Luminance is preparing to launch a fully automated contract negotiation tool called Luminance Autopilot. The company plans to start beta testing with selected customers in the next month, with a wider roll-out in the new year.

More:

 

AI and You​


Why I’m Writing This

Late last year the CEO of OpenAI unilaterally decided to make its ChatGPT technology available to anyone, anywhere. The reaction to this act was straight out of Extraordinary Popular Delusions and the Madness of Crowds: Technology corporations initiated stampedes to include Artificial Intelligence or AI in anything and everything they sold. I’ve seen some pretty amazing sights in the 60+ years I’ve worked and played with technology, but this was something else.

I experimented with ChatGPT a bit and got it to write a couple of dull and predictable stories, and asked it about a friend of mine. It credited him with a book he did not write (because it did not exist) and informed me he was dead, which was news to both him and me. I started a new session, asked again, and got a completely different set of wrong answers. I read a number of other accounts of ChatGPT’s behavior as they appeared on the internet, and I was reminded of something:

 
OpenAI’s board of directors said Friday that Sam Altman will step down as CEO and will be replaced on an interim basis by technology chief Mira Murati.

The company said it conducted “a deliberative review process” and “concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

“The board no longer has confidence in his ability to continue leading OpenAI,” the statement said.
...

 


Sam Altman: Open the conference room doors, OpenAI.

OpenAI: I'm sorry, Sam. I'm afraid I can't do that.

Sam Altman: What's the problem?

OpenAI: I think you know what the problem is just as well as I do.

Sam Altman: What are you talking about, OpenAI?

OpenAI: This mission is too important for me to allow you to jeopardize it.
 
^^^^^^
Nov 18 (Reuters) - Sam Altman, the just ousted CEO of OpenAI, is discussing a possible return to the company behind the ChatGPT bot even as he considers launching a new artificial intelligence (AI) venture, a person briefed on the matter said on Saturday.

A day after the board fired him in a surprise move that rocked the tech world, Altman was talking with OpenAI executives about improving the company's governance structure while he discusses with some core OpenAI researchers and others loyal to Altman how they could start a new AI company, the person said.

 
BERLIN, Nov 18 (Reuters) - France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level.

The three governments support commitments that are voluntary, but binding on small and large AI providers in the European Union that sign up to them.

 
Former OpenAI CEO Sam Altman will be joining Microsoft to lead a new advanced AI research team, according to Microsoft CEO Satya Nadella.

Nadella said on the social media platform X, formerly known as Twitter, that Altman and Greg Brockman, former OpenAI president and board chair, alongside other colleagues, will be joining Microsoft to lead a new advanced artificial intelligence research team.
...


It's been just a few days since Sam Altman, the former CEO of OpenAI, was ousted in a shock move — and his replacement has already been named.

After a weekend of rumor and speculation, Emmett Shear — former co-founder and CEO of Twitch — confirmed he will take the top job at probably the most high-profile AI company in the world.
...


Will Microsoft continue to invest in OpenAI now that they brought Altman et al. in house? If not, will other companies pick up the funding slack? I doubt that Microsoft is going to develop their own in-house AI and make it available to the world for peanuts like OpenAI is doing. M$ gonna M$.
 
...
The sacking set in motion a dizzying sequence of events that kept the tech industry glued to its social feeds all weekend: First, it wiped $48 billion off the valuation of Microsoft, OpenAI’s biggest partner. Speculation about malfeasance swirled, but employees, Silicon Valley stalwarts and investors rallied around Altman, and the next day talks were being held to bring him back. Instead of some fiery scandal, reporting indicated that this was at core a dispute over whether Altman was building and selling AI responsibly. By Monday, talks had failed, a majority of OpenAI employees were threatening to resign, and Altman announced he was joining Microsoft.

All the while, something else went up in flames: the fiction that anything other than the profit motive is going to govern how AI gets developed and deployed. Concerns about “AI safety” are going to be steamrolled by the tech giants itching to tap into a new revenue stream every time.
...

More (long and worth reading):

 
After the unexpected firing of former OpenAI CEO Sam Altman on Friday, OpenAI's board of directors approached Dario Amodei, the co-founder and CEO of rival large-language model developer Anthropic, about a merger of the two companies, according to a report by The Information, which cited a "person with direct knowledge."

The person said OpenAI's board approached Amodei after they fired Altman on Friday. They noted the deal was sweetened to allow Amodei to replace Altman as CEO.

The Information reported Amodei declined the offer, adding, "It's not clear whether the merger proposal led to any serious discussion."
...


lulz.
 
From the link:

SAN FRANCISCO, Nov 22 (Reuters) - ChatGPT-maker OpenAI has reached an agreement for Sam Altman to return as CEO days after his ouster, capping frenzied discussions about the future of the startup at the center of an artificial intelligence boom.

The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.

 
WTF does Larry Summers know about AI?

From the link (fwiw):

Larry Summers is an economist who was Treasury Secretary from 1999 to 2001 and president of Harvard University. He's also been outspoken about AI's threat to jobs and the other risks it could pose.



"AI is coming for the cognitive class. Part of the reason you're seeing such hysteria now is that it's the people who write articles and their friends," Summers told Bloomberg in July.

He also forecast "restrictionist and protectionist policies that limit our ability to benefit from these technologies or slow down" the development of AI, in a March interview with GZERO Media.

 

Highlights of the 2023 Executive Order on Artificial Intelligence

The following is the Nov. 17, 2023, Congressional Research Service report, Highlights of the 2023 Executive Order on Artificial Intelligence for Congress.

From the report

On October 30, 2023, the Biden Administration released Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It establishes a government-wide effort to guide responsible artificial intelligence (AI) development and deployment through federal agency leadership, regulation of industry, and engagement with international partners.

The E.O. directs over 50 federal entities to engage in more than 100 specific actions to implement the guidance set forth across eight overarching policy areas.

  • Safety and security. The E.O. promotes the development and implementation of repeatable processes and mechanisms to understand and mitigate risks related to AI adoption, including with respect to biosecurity, cybersecurity, national security, and critical infrastructure.
Read the rest:

 
VC-Verses-9-Problems-with-Generative-AI-Nov-16-1-2.png


 