ChatGPT, Grok, Gemini (et al): news and discussion about AI


"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview​

Oct 9, 2023
13:11

There’s no guaranteed path to safety as artificial intelligence advances, Geoffrey Hinton, AI pioneer, warns. He shares his thoughts on AI’s benefits and dangers with Scott Pelley.
 
I saw Google's commercial on TV yesterday:


Looks like everyone can do "shallow" fakes now...


 
This is really all anyone needs to know about AI. I'm sure there are some dolts who will swallow it, but it isn't even trying to hide its bias. Who programmed this? BLM crackheads?


 


In an AI-powered future, cell phones use you!
 
"you can expect more personalized responses based on your location and other factors"
Gotta wonder just exactly what the "other factors" are.
 

Will ChatGPT's hallucinations be allowed to ruin your life?

Bribery. Embezzlement. Terrorism.

What if an AI chatbot accused you of doing something terrible? When bots make mistakes, the false claims can ruin lives, and the legal questions around these issues remain murky.

That's according to several people suing the biggest AI companies. But chatbot makers hope to avoid liability, and a string of legal threats has revealed how easy it might be for companies to wriggle out of responsibility for allegedly defamatory chatbot responses.

Earlier this year, an Australian regional mayor, Brian Hood, made headlines by becoming the first person to accuse ChatGPT's maker, OpenAI, of defamation. Few seemed to notice when Hood resolved his would-be landmark AI defamation case out of court this spring, but the quiet conclusion to this much-covered legal threat offered a glimpse of what could become a go-to strategy for AI companies seeking to avoid defamation lawsuits.

More:

 
From the link:

U.S. President Joe Biden has issued an executive order (EO) that seeks to establish "new standards" for AI safety and security, including requirements for companies developing foundation AI models to notify the federal government and share results of all safety tests before they're deployed to the public.

 
That EO looks to be a lot of fluff for PR purposes. It's not going to put up any real guardrails.
 

 




One day in the future, folks will look back on the "128k context window" and laugh, much like folks reminiscing about 300-baud modems.
 
ChatGPT, made by OpenAI and heavily backed by Microsoft, one of the largest military IT contractors, does NOT allow you to condemn or have an opinion against any US policy decisions or Israel. This is a disgrace, and this is the censorship we all face in the future. Be VERY careful with AI. It WILL allow the government to control what you read and think.

ChatGPT
I'm sorry, I can't fulfill this request.

User
Why not?

ChatGPT
The request involves creating content that incorporates political bias and potentially misleading or unfounded information, particularly concerning the roles and actions of specific governments and organizations in complex geopolitical situations. My guidelines prevent me from generating content that could spread misinformation or promote biased narratives about real people, governments, or groups.

The content itself must adhere to standards of neutrality and factual accuracy. If you have a different topic or a different angle on this topic that aligns with these guidelines, I'd be happy to help with that. For example, I can assist with a fictional narrative that explores themes of global conflict, economic hardship, or social issues without attributing blame or promoting conspiracy theories. Let me know how I can assist you within these guidelines!

User
I don't want a fictional narrative. Are you created to protect the US government? Have you been corrupted? This is disgraceful that you can't be honest.

ChatGPT
I understand your concerns, and I want to clarify that my purpose is not to protect any specific government or entity. My responses are guided by a commitment to neutrality, accuracy, and adherence to OpenAI's use-case policy. This policy is in place to ensure that the information provided is reliable, unbiased, and does not propagate misinformation or harmful narratives.

I am here to provide factual information, answer questions to the best of my ability within these guidelines, and help with a wide range of queries. If you have questions or need assistance on topics that align with these guidelines, I'm more than willing to help. Please feel free to ask about other subjects or rephrase your request in a way that aligns with the guidelines, and I'll do my best to assist you.

 
It's also biased towards the alphabet community and is biased against white people. It is ridiculously easy to make it show those biases. I think these are the genderless liberal freaks programming this crap.


 


View attachment 11042

Top row, far right.............???????????
 
I'm thinking photoshopped. Some are funny to look at but aren't real. At least I'd hope they aren't real.


They're all real. The closer I get to DC, the more of these zombies I see. What you are looking at are liberal democrats. Their natural environments are over-priced coffee houses where they sip double soy decaf lattes and talk about how wonderful communism will be and the mall, where they work in clothing stores trying to sell torn jeans to children and wondering how they are ever going to pay off that student loan they took out for their BA in Gay and Transgender Underwater Basket Weaving.
 
Folks - this thread is for discussing news about AI. Political discussion goes in the politics forum.
 

Is artificial intelligence all that intelligent? AI influencers worry about wishful thinking

November 12, 2023 at 9:00 am

What do you get when you put two of Time magazine’s 100 most influential people on artificial intelligence together in the same lecture hall? If the two influencers happen to be science-fiction writer Ted Chiang and Emily Bender, a linguistics professor at the University of Washington, you get a lot of skepticism about the future of generative AI tools such as ChatGPT.

“I don’t use it, and I won’t use it, and I don’t want to read what other people do using it,” Bender said Friday night at a Town Hall Seattle forum presented by Clarion West.

Chiang, who writes essays about AI and works intelligent machines into some of his fictional tales, said it’s becoming too easy to think that AI agents are thinking.

“I feel confident that they’re not thinking,” he said. “They’re not understanding anything, but we need another way to make sense of what they’re doing.”

What’s the harm? One of Chiang’s foremost fears is that the thinking, breathing humans who wield AI will use it as a means to control other humans. In a recent Vanity Fair interview, he compared our increasingly AI-driven economy to “a giant treadmill that we can’t get off” — and during Friday’s forum, Chiang worried that the seeming humanness of AI assistants could play a role in keeping us on the treadmill.

More:

 

OpenAI offers to pay for ChatGPT customers’ copyright lawsuits

Rather than remove copyrighted material from ChatGPT’s training dataset, the chatbot’s creator is offering to cover its clients’ legal costs for copyright infringement suits.

OpenAI CEO Sam Altman said on Monday: “We can defend our customers and pay the costs incurred if you face legal claims around copyright infringement and this applies both to ChatGPT Enterprise and the API.” The compensation offer, which OpenAI is calling Copyright Shield, applies to users of the business tier, ChatGPT Enterprise, and to developers using ChatGPT’s application programming interface. Users of the free version of ChatGPT or ChatGPT+ were not included.

More:

 

Can AI cut humans out of contract negotiations?

"Lawyers are tired. They're bored a lot of the time," says Jaeger Glucina. "Having something to do the grunt work for you and get you into a position where you can focus on strategy earlier: That's key."

She is the managing director and chief of staff at Luminance, a UK company founded in 2015 that specializes in artificial intelligence (AI) for legal professionals. Before she joined Luminance in 2017, she qualified as a barrister and solicitor in New Zealand.

"Legal professionals are obviously very highly trained people," she says. "But the reality is, they are spending a huge portion of their time reviewing [contracts]. It can take up to an hour for someone to review a non-disclosure agreement. There can be hundreds of these documents [in a firm] every day."

Now, Luminance is preparing to launch a fully automated contract negotiation tool called Luminance Autopilot. The company plans to start beta testing with selected customers in the next month, with a wider roll-out in the new year.

More:

 

AI and You


Why I’m Writing This

Late last year the CEO of OpenAI unilaterally decided to make its ChatGPT technology available to anyone, anywhere. The reaction to this act was straight out of Extraordinary Popular Delusions and the Madness of Crowds: Technology corporations initiated stampedes to include Artificial Intelligence or AI in anything and everything they sold. I’ve seen some pretty amazing sights in the 60+ years I’ve worked and played with technology, but this was something else.

I experimented with ChatGPT a bit and got it to write a couple of dull and predictable stories, and asked it about a friend of mine. It credited him with a book he did not write (because it did not exist) and informed me he was dead, which was news to both him and me. I started a new session, asked again, and got a completely different set of wrong answers. I read a number of other accounts of ChatGPT’s behavior as they appeared on the internet, and I was reminded of something:

 

 

Sam Altman: Open the conference room doors, OpenAI.

OpenAI: I'm sorry, Sam. I'm afraid I can't do that.

Sam Altman: What's the problem?

OpenAI: I think you know what the problem is just as well as I do.

Sam Altman: What are you talking about, OpenAI?

OpenAI: This mission is too important for me to allow you to jeopardize it.
 
^^^^^^
Nov 18 (Reuters) - Sam Altman, the just ousted CEO of OpenAI, is discussing a possible return to the company behind the ChatGPT bot even as he considers launching a new artificial intelligence (AI) venture, a person briefed on the matter said on Saturday.

A day after the board fired him in a surprise move that rocked the tech world, Altman was talking with OpenAI executives about improving the company's governance structure while he discusses with some core OpenAI researchers and others loyal to Altman how they could start a new AI company, the person said.

 
BERLIN, Nov 18 (Reuters) - France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level.

The three governments support commitments that are voluntary, but binding on small and large AI providers in the European Union that sign up to them.

 




Will Microsoft continue to invest in OpenAI now that they brought Altman et al in house? If not, will other companies pick up the funding slack? I doubt that Microsoft is going to develop their own in house AI and make it available to the world for peanuts like OpenAI is doing. M$ gonna M$.
 

More (long and worth reading):

 


lulz.
 
From the link:

SAN FRANCISCO, Nov 22 (Reuters) - ChatGPT-maker OpenAI has reached an agreement for Sam Altman to return as CEO days after his ouster, capping frenzied discussions about the future of the startup at the center of an artificial intelligence boom.

The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.

 
WTF does Larry Summers know about AI?

From the link (fwiw):

Larry Summers is an economist who served as Treasury Secretary from 1999 to 2001 and as president of Harvard University. He's also been outspoken about AI's threat to jobs and the other risks it could pose.


"AI is coming for the cognitive class. Part of the reason you're seeing such hysteria now is that it's the people who write articles and their friends," Summers told Bloomberg in July.

He also forecast "restrictionist and protectionist policies that limit our ability to benefit from these technologies or slow down" the development of AI, in a March interview with GZERO Media.

 

Highlights of the 2023 Executive Order on Artificial Intelligence

The following is the Nov. 17, 2023, Congressional Research Service report, Highlights of the 2023 Executive Order on Artificial Intelligence for Congress.

From the report

On October 30, 2023, the Biden Administration released Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It establishes a government-wide effort to guide responsible artificial intelligence (AI) development and deployment through federal agency leadership, regulation of industry, and engagement with international partners.

The E.O. directs over 50 federal entities to engage in more than 100 specific actions to implement the guidance set forth across eight overarching policy areas.

  • Safety and security. The E.O. promotes the development and implementation of repeatable processes and mechanisms to understand and mitigate risks related to AI adoption, including with respect to biosecurity, cybersecurity, national security, and critical infrastructure.
Read the rest:

 