I was intrigued by the release of ChatGPT for research last week, so I set out to explore it by posing a few of my questions:
- Why are large language models useful?
- Is nothing something?
- When you light a lamp, where does the darkness go?
- What came first, the chicken or the egg?
- What is consciousness?
- How do you measure funniness?
- What is the usefulness of Fourier transforms?
- Is time real?
- What exists beyond the boundary of the universe?
I chose the questions from a mix of technical, philosophical, and physics areas. The answers felt useful from the perspective of getting to relevance quickly (by reducing the filtering I had to do myself). At the same time, I was a little worried about getting anchored to a narrow focus (what am I not seeing?). I would still consider it a useful tool.
For example, the answers to the questions on the boundary of the universe and on the usefulness of Fourier transforms felt good enough.
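As a small aside on the Fourier transform question: the kind of usefulness such an answer typically points at can be illustrated with a toy example (a sketch of my own, not taken from the ChatGPT response) that recovers the frequency of a simple sine wave from its samples.

```python
import numpy as np

# Toy illustration: find the dominant frequency of a sampled sine wave.
fs = 100                              # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)           # one second of samples
signal = np.sin(2 * np.pi * 5 * t)    # a pure 5 Hz tone

# The real-input FFT turns the time-domain samples into a spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The peak of the spectrum sits at the tone's frequency.
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # → 5.0
```

This is the essence of why Fourier transforms are useful: they expose structure (here, a single frequency) that is hard to see in the raw samples.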
To the question "Is nothing something?", the response included a summary expressing two viewpoints, one from philosophy and another from physics.
When posed questions centered on physics or science, the responses were summaries that felt good enough.