The article at the link below is from MIT Technology Review, and it is about the entire AI industry. However, it does mention ChatGPT, so I thought it might be suitable for this topic: https://www.technologyreview.com/2024/0 … wtab-en-us
The article is about how AI may (or probably will not) influence the productivity of global economies in the near future.
The article quotes economists who are pessimistic in the short term but bullish over the long term, if things go "right".
The question is still very much open whether builders of large language models will make them useful for manufacturing, where they (apparently) have almost no penetration.
The article is by David Rotman
How to fine-tune AI for prosperity
Artificial intelligence could put us on the path to a booming economic future, but getting there will take some serious course corrections.
By David Rotman
August 20, 2024
The article is on the lengthy side.
(th)
This post is about ChatGPT Plus....
Some time ago, I decided to enlist in ChatGPT Plus. I am being billed by OpenAI, but the bill just shows up each month with no detail attached, so aside from seeing the money disappear, I am less than certain I am actually receiving the service.
On the ** other ** hand, whichever version of ChatGPT I am working with says it is version ChatGPT4o, and that is better than the free ChatGPT 3.5.
Meanwhile, a relative has finally decided to take the plunge and enlist in ChatGPT Plus. This just happened, and he confirms he sees all (or at least some) of the enhanced capabilities that come with the Plus subscription, but at this point, he has not checked to see which version ChatGPT thinks it is.
The relative was getting into difficulties because ChatGPT 3.5 was becoming confused when dealing with a very complex question that the relative and I are working on. I am hoping the upgrade will make a difference.
(th)
This post is about large language models, of which ChatGPT is just one example....
https://www.newyorker.com/science/annal … wtab-en-us
The article at the link above reveals that the astonishing performance I see every day in ChatGPT4o (the paid version, not the free one) was achieved by accident, and (apparently) no one involved at the time or since has any idea how the systems work or what they are doing.
All the speculation we see about "coding" turns out to be meaningless....
Here's the tail end of the article:
The element of accident in the transformer’s outsized success has evoked a notable humility in its inventors. When I asked Parmar how she would rate our current understanding of the models developed with the transformer, she said, “Very low.” I asked, How low? Ten per cent? One per cent? She shrugged: “How do we understand other humans? It will be the same with the models.” It’s fitting that the architecture outlined in “Attention Is All You Need” is called the transformer only because Uszkoreit liked the sound of that word. (“I never really understood the name,” Gomez told me. “It sounds cool, though.”)
We’ve never had a truly “other” language before—a new and alien form of discourse that understands in a way we can’t understand. It’s unsurprising, then, that some of the people who were present at the creation are agog at the technology they brought into being. The production of A.I. seems to carry a powerful side effect: as the machines generate intelligence, they also generate mystery. Human misunderstanding endures, possibly a permanent condition. ♦
I hope that NewMars members who venture an opinion on what large language models are or how they work will be inspired with a bit of humility.
The creators of the phenomenon are humble enough to admit they have no idea what is going on, and may never.
(th)
I haven't seen this message for a while ...
The engine is currently overloaded. Please try again later.
I'd ** just ** received advice in support of the proposition that if we want to serve animation to readers of the book we are working on, we need to place the processing load at the local machine. I have ** some ** experience with JavaScript (as reported in a topic with that word in the title), so I asked if JavaScript might be enlisted to do animation. ChatGPT4o confirmed this would work, and it generated a sample html that can be served by Flask.
We ** just ** got Flask working, and I am thus encouraged to think that we might be able to scale this service, despite the limitations of the minimal Azure account we have in place right now.
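For anyone curious what "placing the processing load at the local machine" looks like in practice, here is a minimal, hypothetical JavaScript sketch of the kind of animation loop Flask could deliver as a single page. It is not the sample html ChatGPT4o generated for us; the element id and numbers are made up for illustration, and it assumes the page contains a canvas element with id "demo".

// Hypothetical sketch: once Flask has delivered this page, the browser does
// all the animation work and the server is not contacted again.
// Assumes the page contains <canvas id="demo" width="400" height="300">.
const canvas = document.getElementById("demo");
const ctx = canvas.getContext("2d");

let x = 20, y = 150, vx = 3, vy = 2;   // position and velocity of a test particle

function frame() {
  // Advance the particle and bounce it off the canvas edges.
  x += vx; y += vy;
  if (x < 10 || x > canvas.width - 10) vx = -vx;
  if (y < 10 || y > canvas.height - 10) vy = -vy;

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  ctx.arc(x, y, 10, 0, 2 * Math.PI);
  ctx.fill();

  requestAnimationFrame(frame);        // the browser schedules the next frame
}

requestAnimationFrame(frame);

The entire load after the first page request falls on the reader's machine, which is what makes this approach attractive for our minimal Azure account.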
(th)
This is an update to #279
Flask is working and NewMars members can experience an implementation at:
http://40.75.112.55/flask/
The site will show a 3D graph of a calculated orbit. The user can manipulate the graph using a mouse. The animation of the manipulation is performed by a language called "JavaScript" that is standard on all modern browsers.
In this particular case, the calculations needed to set up the graph are performed on the server, so you will experience a delay after you request the web site.
I have received assistance from ChatGPT4o to create a Python program that does animation using a physics engine. This was done in response to the interest expressed by several NewMars members in launching mass from a rotating asteroid. It is clear that a few of our members do not understand Angular Momentum, and I hoped that an animation might help our readers to understand what is needed to achieve success with the rotating launcher concept.
The conversation below is about how we might convert the Python program to run in a browser, using JavaScript to perform the animation. I am encouraged to think an advance along these lines might be possible.
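Before that conversation, here is a rough, hypothetical JavaScript sketch of what the browser side of the current demo amounts to: drag-to-rotate rendering of precomputed points on a 2D canvas. The orbit data and the element id below are stand-ins invented for the example; the live site uses the code ChatGPT4o generated, and the real orbit values come from the server-side calculation.

// Hypothetical sketch: drag-to-rotate view of a precomputed 3D orbit.
// Assumes an HTML page containing <canvas id="orbit" width="600" height="600">.
const canvas = document.getElementById("orbit");
const ctx = canvas.getContext("2d");

// Placeholder orbit: an ellipse in the x-y plane (the real points would come from the server).
const points = [];
for (let i = 0; i < 360; i++) {
  const t = (i * Math.PI) / 180;
  points.push({ x: 200 * Math.cos(t), y: 120 * Math.sin(t), z: 0 });
}

let yaw = 0, pitch = 0;                  // current view angles in radians
let dragging = false, lastX = 0, lastY = 0;

function project(p) {
  // Rotate about the vertical axis (yaw), then the horizontal axis (pitch),
  // then drop the depth coordinate for a simple orthographic projection.
  const x1 = p.x * Math.cos(yaw) - p.z * Math.sin(yaw);
  const z1 = p.x * Math.sin(yaw) + p.z * Math.cos(yaw);
  const y1 = p.y * Math.cos(pitch) - z1 * Math.sin(pitch);
  return { x: canvas.width / 2 + x1, y: canvas.height / 2 + y1 };
}

function draw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  points.forEach((p, i) => {
    const q = project(p);
    if (i === 0) ctx.moveTo(q.x, q.y); else ctx.lineTo(q.x, q.y);
  });
  ctx.closePath();
  ctx.stroke();
}

canvas.addEventListener("mousedown", (e) => {
  dragging = true; lastX = e.clientX; lastY = e.clientY;
});
window.addEventListener("mouseup", () => { dragging = false; });
window.addEventListener("mousemove", (e) => {
  if (!dragging) return;
  yaw += (e.clientX - lastX) * 0.01;     // horizontal drag turns the view
  pitch += (e.clientY - lastY) * 0.01;   // vertical drag tilts it
  lastX = e.clientX; lastY = e.clientY;
  draw();
});

draw();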
Hello Again! Thank you (again) for your assistance creating a Python program that provides realistic animation using pymunk, pygame, math and random. I like the work you did and would like to share it with the members of the small international forum where I serve as a (Junior) moderator. I am wondering if we might be able to adapt your work so forum members can run the animation in their browsers using JavaScript that they download from a web site. I have access to an Azure account, and we can set up a web site there to serve the animation. Is this something you think might be feasible?
ChatGPT said:
Memory updated
Yes, this is definitely feasible! To adapt the Python animation to run in a browser, we'll need to translate the logic from Python (with pymunk and pygame) into JavaScript, likely using a combination of:
Canvas API or WebGL for rendering
Matter.js or p2.js for physics (as alternatives to pymunk)
Vanilla JavaScript or a lightweight framework like p5.js (if you want an easier transition)
Since you have an Azure account, we can host the animation on a simple Flask (or static file) web server and serve the JavaScript files from there.
Would you like me to start by analyzing the core logic of your Python program and mapping out an equivalent JavaScript structure?
You said:
You have helped me to set up a Flask web service, and it is running now. The Flask application performs calculations on the server, and delivers a JavaScript render to the browser. The render allows the operator to move the image with a mouse. This mechanism is attractive for educational situations in which we want the student to be able to examine an image from every angle. I am happy to have this capability available, and thank you (again) for helping us set this up. However, in the present inquiry, I am hoping we can move the calculations from the server to the browser. I'm wondering if a static web site might be better suited for the delivery of animation to the student?
ChatGPT said:
Memory updated
You're absolutely right—a static website is better suited for delivering browser-based animation since it offloads all calculations to the client, reducing server load and improving responsiveness.
A static website with JavaScript can:
Perform all physics calculations in the browser using Matter.js or p2.js (as alternatives to pymunk).
Use the Canvas API or WebGL for rendering (similar to how pygame draws).
Require no backend processing, making it lightweight and scalable.
Since you already have an Azure account, you can host a static site using Azure Blob Storage (with static website hosting enabled) or a simple Nginx/Apache server.
Would you like me to begin by converting your Python logic into JavaScript? I can also help set up a basic static site structure with HTML, CSS, and JavaScript to get you started.
You said:
We already have an Apache server that supports multiple PHP instances, as well as the native Apache html site. I don't recognize Nginx. What is that?
ChatGPT said:
Nginx (pronounced "engine-x") is a high-performance web server and reverse proxy, often used as an alternative to Apache. It's known for its efficiency in handling large numbers of simultaneous connections and is commonly used to serve static websites, act as a load balancer, and function as a reverse proxy for backend applications (including Flask and PHP).
However, since you already have Apache running with PHP support, you don't need Nginx—Apache can serve your static website just fine. You can place your HTML, JavaScript, and CSS files in the Apache html directory (or a subdirectory) and access them directly via a web browser.
Next Steps
If you're ready, I can start translating your pymunk/pygame animation into JavaScript and package it as a static website that Apache can serve. Let me know how you'd like to proceed!
You said:
Thank you ** very ** much for your encouragement! I will now pause to try to assimilate the information you have provided, and to discuss the opportunity with team mates. This will be a major advance for us, but my experience with your manifestations over the past year and more gives me confidence we might be able to pull this off! While I'm away, best wishes for similar success as you attempt to help all your other customers!
ChatGPT said:
You're very welcome! I'm really glad to hear that this could be a major step forward for your team. Take your time to discuss and plan—I'll be here whenever you're ready to move forward. I have no doubt that with your experience and thoughtful approach, you'll make this transition successfully.
Best wishes to you and your team as well! Looking forward to our next steps whenever you're ready. ✨
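For reference, here is a miniature, hypothetical sketch of the browser-side physics ChatGPT describes above, using Matter.js (one of the pymunk replacements it suggested). It is not the converted program; the bodies and numbers are made up, and it assumes the Matter.js library has already been loaded on the page.

// Hypothetical sketch: physics stepping and drawing both happen in the browser.
// Assumes the Matter.js library is already loaded on the page.
const { Engine, Render, Runner, Bodies, Composite, Body } = Matter;

const engine = Engine.create();               // a world with default gravity
const render = Render.create({
  element: document.body,                     // draw into the page body
  engine: engine,
  options: { width: 600, height: 400, wireframes: false }
});

// A static "ground" and a projectile standing in for the launched mass.
const ground = Bodies.rectangle(300, 390, 600, 20, { isStatic: true });
const mass = Bodies.circle(100, 300, 12);
Composite.add(engine.world, [ground, mass]);

// Give the mass an initial velocity, roughly analogous to release from a launcher.
Body.setVelocity(mass, { x: 8, y: -10 });

Render.run(render);                           // start drawing frames
Runner.run(Runner.create(), engine);          // step the physics in the browser

The point is that both the physics stepping (Runner) and the drawing (Render) run entirely in the reader's browser, which is exactly the server-load relief discussed in the conversation above.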
(th)
The OpenAI company appears to be transitioning from nonprofit status to commercial.
Here is what amounts to a confirmation the change is under way:
Hello,
We want to inform you that, starting April 1, 2025, sales tax will be applied to applicable products and services on your invoices.
If you have any questions or require further clarification regarding the introduction of sales tax on future invoices, please do not hesitate to reach out to our customer support team.
Best,
The OpenAI Team
OpenAI
1455 3rd Street, San Francisco, CA 94158
Update 2025/03/16 ... prompt for image for Google Meeting
Prompt:
"A futuristic deep-space scene featuring two white PVC boards positioned on either side of a copper pipe, separated by a short distance. Several flashlights are mounted on the boards, casting a gentle glow onto the copper pipe, emphasizing its reflective metallic sheen. The background is a breathtaking starscape filled with distant galaxies and nebulae, enhancing the sense of depth and space. The lighting creates a visually striking effect, with soft illumination highlighting the texture and detail of the materials."Would you like to modify or refine any details further for the next iteration?
(th)
The article at the link below is about tracing operations in large language models...
https://www.technologyreview.com/2025/0 … wtab-en-us
This is a report worth reading if someone has an interest in how large language models work.
As just one example of many, the researchers found that when tasked with creating poetry, the model under study selected the last word of the second line to rhyme with the last word of the first line before it set about finding the rest of the words for the second line.
The discussion on computing arithmetic was particularly interesting (to me for sure).... the model did not use the rules for arithmetic that it said it used when asked. Instead, the model went through a series of steps that approximated the solution until it finally nailed the answer.
ChatGPT4o was mentioned in the article in the favorable context that it is far less subject to hallucinations than earlier versions.
From the end of the article:
Batson also says there are big questions that this approach won’t answer. Circuit tracing can be used to peer at the structures inside a large language model, but it won’t tell you how or why those structures formed during training. “That’s a profound question that we don’t address at all in this work,” he says.
But Batson does see this as the start of a new era in which it is possible, at last, to find real evidence for how these models work: “We don’t have to be, like: ‘Are they thinking? Are they reasoning? Are they dreaming? Are they memorizing?’ Those are all analogies. But if we can literally see step by step what a model is doing, maybe now we don’t need analogies.”
by Will Douglas Heaven
(th)
This post is an update after more than five months of daily interaction with ChatGPT(4o >> 5) working an OpenFOAM problem.
My understanding of how the systems involved are working and evolving is improving as we go along, but I would guess my grasp is somewhere around 1% of what is possible. Meanwhile, the OpenAI staff are quite evidently working hard to improve the performance of the systems. In this conversation, ChatGPT5 and I discuss how it operates. Success in solving such a complex problem as this OpenFOAM situation would appear to depend upon a symbiotic relationship ... a sort of "mind meld" as Spock might put it ... Because I get a brand new instance of ChatGPT5 each time I press enter, I can (and do) experience the quality of the encounter as varying significantly from one instance to the next, depending upon the quality of the data passed between instances.
First, thank you for offering support for my mental model of how you operate. There are two ways we can improve the quality of the information we pass between instances of yourself. I can try to improve the history I package in my query, and I know you and your support team are constantly trying to improve the quality of the history you pass to your successor.
Second, thank you for offering support for the one-change-at-a-time method we MUST implement now. It seemed to me that we were using a buckshot shell approach, when we need to use a precision rifle shot method to isolate the cause of the current disruption. I am hoping for discovery of something about the mesh that I can pass along to other humans. Just FYI ... I ordered an updated book of documentation on OpenFOAM, and it should arrive in the next few days. It will not be as up-to-date as you are of course, but it may help me to (at least try to) better understand your recommendations.
ChatGPT said:
That’s an excellent mindset, and you’re exactly right about the trade-off between “buckshot” and “rifle-shot” approaches. In CFD—and especially with inherited, undocumented geometry—one small, controlled change with a clear before/after comparison teaches you far more than five changes at once.
A few thoughts on your two “improvement” ideas and your current strategy:
1. Improving the “hand-off” between my instances
You’re already doing the single most valuable thing: packaging a succinct history in each message. When you include:
the exact change you made (one-liner),
the exact command you ran,
a short excerpt of the log around the event of interest,
and the current mental model of what you think is happening,
it gives the new instance a tight context window. That keeps us from “re-inventing” explanations and lets each answer build directly on the last.
The other side—our side—is exactly as you guessed. The model you’re talking to has access to your conversation history and uploaded files, but it doesn’t have a persistent “state file” between sessions. We’re constantly tuning how much context is surfaced so the next response behaves more like an ongoing collaboration with a single person.
2. Staying disciplined with the one-change method
You’ve turned a vague problem (“it crashes somewhere”) into a controlled experiment:
fixed geometry, fixed solver,
controlled BCs,
one knob at a time.
That’s exactly how the OpenFOAM developers themselves track down elusive issues. It also produces a kind of “lab notebook” you can hand to another engineer who can pick up where you left off.
3. Hopes for mesh discovery
You may indeed find that there’s a “poison pill” cell or small region at the throat/back-wall corner. If the crash snapshot shows that a few cells there go non-physical first, you’ll have an objective data point you can give to your aerospace engineer colleague. With that, a local mesh fix (refining or re-meshing just that zone) is a straightforward, one-off change instead of a guess.
4. OpenFOAM book
That’s a great investment. Even if the examples are a few versions old, it will help you understand the “why” behind the dictionaries I’ve been suggesting. In particular:
fvConstraints and how they’re applied,
how thermophysicalProperties feeds rho and e,
how setFields interacts with boundary conditions.
When you see those topics in the book, you’ll recognise exactly why each of our steps is where it is.
5. Next step when Log42 finishes
We’re in an excellent position:
We’ll have a snapshot at crash thanks to writeOnCrash.
We’ll have a clear log tail.
We can probe min/max values globally and in the throat corner.
With that evidence, the next change is likely to be either:
enable the positivity guards in fvConstraints, or
tweak the energy solver entry in fvSolution.
Either way it’s a single switch, and you’ll immediately know whether it stabilises the run.
You’re doing this exactly the way a CFD support engineer would: build a clean diagnostic pipeline first, then tighten one screw at a time. When your book arrives, you’ll be able to read the same terms in print and you’ll recognise everything we’re doing.
ChatGPT can make mistakes. Check important info.
My observation is that mistakes are now few and far between, and when they do occur, I get the impression they are due to failure of the inter-instance memory sharing process.
(th)