Important: How to use the ChatGPT Playpen forum room

pmbug (Your Host, Administrator):

PMBug has an integration with OpenAI.com's ChatGPT AI via a programming API. This integration is only working in the ChatGPT Playpen forum room. The rest of the PMBug site functions as normal.
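For the curious, here is a rough sketch of the kind of request the integration presumably sends under the hood. The mod's actual code isn't public, so the prompt text below is just an illustration (written against the pre-1.0 openai-python library that was current for gpt-3.5-turbo):

```python
# A rough sketch of the kind of request the integration presumably sends
# (openai-python pre-1.0 style). The mod's actual code isn't public, so
# the prompt text here is just an illustration.
import openai

openai.api_key = "sk-..."  # placeholder for the site's API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Is silver undervalued relative to gold?"}],
)
print(response.choices[0].message.content)
```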

Presently, participation in the ChatGPT Playpen forum room is limited to the moderating team and PM Bug members that have an active membership upgrade. The API used to interact with ChatGPT charges a very small, nominal fee based upon the volume of data that is processed (input and output), so it's not really feasible to provide access for free.
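To give a sense of scale, here is a back-of-the-envelope estimate using OpenAI's published mid-2023 rates for gpt-3.5-turbo. The rates change over time, so treat the numbers as illustrative:

```python
# Back-of-the-envelope cost math. The per-token rates below were OpenAI's
# published gpt-3.5-turbo prices in mid-2023 and may have changed since.
INPUT_PER_1K = 0.0015   # USD per 1,000 prompt tokens
OUTPUT_PER_1K = 0.002   # USD per 1,000 completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated charge in USD for a single API exchange."""
    return (prompt_tokens / 1000 * INPUT_PER_1K
            + completion_tokens / 1000 * OUTPUT_PER_1K)

# A typical forum exchange: roughly 500 tokens in, 300 tokens out.
print(f"${estimate_cost(500, 300):.4f}")  # about $0.0014
```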

To interact with ChatGPT, you can (if you have an upgraded membership) either create a new thread topic, or reply to an existing ChatGPT post using the reply-with-quote option. ChatGPT will only respond to replies if you quote one of its posts.

It seems that it doesn't really matter whether you quote ChatGPT's entire post or just a few words of it. The integration developer informed me that the integration is just passing the text of a post (new thread post or quote+reply) in order to prompt a response from ChatGPT. I assume there must be some sort of session/thread ID involved as well, because ChatGPT appears to respond with the context of the full thread discussion.
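If my assumption is right, the mechanism might look something like the sketch below: replaying the thread's earlier posts as context on every call. This is guesswork on my part, not the mod's actual code, and the function and field names are made up:

```python
# My guess at how thread context could be carried: replay the thread's
# earlier posts as the messages list on every call. Function and field
# names here are made up for illustration; this is not the mod's code.
import openai

def reply_in_thread(thread_posts: list[dict], new_post: str) -> str:
    """thread_posts: [{'author': ..., 'body': ...}, ...] in posting order."""
    messages = []
    for post in thread_posts:
        role = "assistant" if post["author"] == "ChatGPT" else "user"
        messages.append({"role": role, "content": post["body"]})
    messages.append({"role": "user", "content": new_post})
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return completion.choices[0].message.content
```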
 
I have made it easier for folks to opt out of seeing any ChatGPT threads. If you don't want to see any of the ChatGPT Playpen content, you can select "Yes" for the "Hide ChatGPT" option on your preferences page.

 
pmbug said:
"Presently, participation in the ChatGPT Playpen forum room is limited to the moderating team and PM Bug members that have an active membership upgrade. ..."

The developer of the ChatGPT integration mod released an update for it which I have installed. The update allows more granular control over user group access to ChatGPT.

Additionally, OpenAI has, for the moment, lowered prices for using gpt-3.5-turbo (which is the version we are using). They are planning on phasing out gpt-3.5-turbo in favor of gpt-3.5-turbo-16k.

So, I checked on the charges accrued so far for usage of ChatGPT in the forums, and the total (over three months of usage) is under $4. The cost is far less than I had anticipated, so with the new integration update, I'm able to open up access to the Playpen to all active members now.

Active members that do *not* have a current account upgrade will be limited to 3 ChatGPT replies per thread for now. Members with upgraded accounts will continue to have unlimited access/replies.
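In other words, the cap works something like the purely illustrative check below; the actual mod's enforcement logic isn't public:

```python
# Purely illustrative sketch of the per-thread cap described above;
# the actual mod's enforcement logic isn't public.
FREE_MEMBER_CAP = 3  # ChatGPT replies per thread for non-upgraded members

def may_prompt_chatgpt(replies_in_thread: int, upgraded: bool) -> bool:
    """Upgraded accounts are unlimited; others get three replies per thread."""
    return upgraded or replies_in_thread < FREE_MEMBER_CAP
```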
 
If anyone wanted to use ChatGPT but was unable to do so... now's your chance.
 
Traditional software responds predictably to instructions. “Generative” artificial-intelligence (AI) models, such as that used by ChatGPT, are different: they respond to requests written in everyday language, and can produce surprising results. On the face of it, writing effective prompts for AI is much simpler than, for example, mastering a programming language. But as AI models have become more capable, making the most of the algorithms within these black boxes has become harder. “Prompt engineering”, as this skill is known, has been likened to guiding a dance partner or poking a beast to see how it will respond. What does it involve?

For starters, a good prompt should include a clear instruction: compile a given policy proposal’s potential downsides, for example, or write a friendly marketing email. Ideally the prompt should coax the model into complex reasoning: telling it to “think step by step” often sharply improves results. So does breaking instructions down into a logical progression of separate tasks. To prompt a clear explanation of a scientific concept, for example, you might ask an AI to explain it and then to define important terms used in its explanation. This “chain of thought” technique can also reveal a bit about what is going on inside the model.
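To make this concrete, here is what such a decomposed, step-by-step prompt could look like when sent through the same chat API used above. The prompt text is invented for the example:

```python
# An illustration of the decomposition technique described above: one
# instruction broken into an explicit step-by-step sequence. The prompt
# text is invented for the example.
import openai

prompt = (
    "Explain how central bank interest-rate policy affects gold prices. "
    "Think step by step: first explain the concept, then define the "
    "important terms used in your explanation, then summarize in one "
    "paragraph."
)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```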

AI users need to be able to see that detail. Because big models are trained on what one prompt engineer calls “everything from everywhere”, it helps to include authoritative texts in a prompt, to direct a model to give particular sources priority or, at the very least, to tell the model to list its sources. Many models offer settings for “temperature”, which, when raised, increase the randomness of results. That can be good for creative tasks like writing fiction but tends to increase the frequency of factual errors.
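For instance, a quick sketch of the same prompt run at three temperature settings; the values are illustrative, and the chat API accepts values from 0.0 up to 2.0:

```python
# The same prompt at three temperature settings. Higher values increase
# randomness; the chat API accepts values from 0.0 up to 2.0.
import openai

for temp in (0.0, 0.7, 1.5):
    r = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write one sentence about silver."}],
        temperature=temp,  # 0.0 is near-deterministic; higher is more varied
    )
    print(f"temperature={temp}: {r.choices[0].message.content}")
```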
...

 