Two business books a day vs GPT

What have I learned from turbo-using GPT for 6 months? Ask everyone the right questions

This will sound unlikely, but I think few people use GPT in as many weird ways, or in as much volume, as I do. According to OpenAI’s analytics on my account, I’ve queried GPT an average of 19,000 times a month (~650 queries/day) since November 2022 – and even before then (before ChatGPT), I was querying the Playground several thousand times a month.

And that’s excluding the thousands of inadvertent GPT queries made on my behalf through Copilot.

Now to be fair, the vast majority of those queries (on ChatGPT, on the Playground, via Copilot, and via the API) were related to coding, where I use the tools as a sort of real-time debugger, code advisor, ideation tool, documentation source, and general junior programmer, all through a process of near-constant copy and paste (Copilot is radically inadequate relative to my customized prompting).

This immediately raises the quality-vs-quantity question. For my purposes, both work. Great questions are like levers in their ability to unlock insight, but do they achieve more total growth than tens of thousands (as in my case) of normal questions? I think there’s a way to achieve high-quality outcomes using this approach of flooding GPT with iterative questions, favors, and commands. After all, children ask an equally absurd number of questions (about 400/day at age four), to their considerable benefit.

But never are four-year-olds’ questions answered at such length.

One consequence of my rat-a-tat querying style has been the vast amount of GPT-generated content I have soaked in. I’m probably “fed” about 130,000 words a day (650 queries/day × ~200 words/response – my guesstimate).

That’s easily two business books worth of words. A day.
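The estimate above is easy to sanity-check. A minimal back-of-envelope sketch (the query rate and ~200-word average response are my own figures from above; the 65,000-word length for a typical business book is an assumption):

```python
# Back-of-envelope check of the "two business books a day" claim.
QUERIES_PER_MONTH = 19_000          # from OpenAI's account analytics
DAYS_PER_MONTH = 30
WORDS_PER_RESPONSE = 200            # rough average, my guesstimate
WORDS_PER_BUSINESS_BOOK = 65_000    # assumed length of a typical business book

queries_per_day = QUERIES_PER_MONTH / DAYS_PER_MONTH   # ~633, rounded up to 650
words_per_day = 650 * WORDS_PER_RESPONSE               # 130,000 words
books_per_day = words_per_day / WORDS_PER_BUSINESS_BOOK

print(f"{queries_per_day:.0f} queries/day")
print(f"{words_per_day:,} words/day")
print(f"{books_per_day:.1f} business books/day")
```

Change the assumed book length and the ratio moves, but anywhere in the 50,000–80,000-word range still lands at roughly two books a day.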

*     *     *

Would I be better off reading two business books a day than cramming 130,000 words into my eyes? 

Here’s the yes argument:

“Yes, because

  • even the schlockiest and most poorly written business books have the advantage of depth – they stay on topic for an extended period of time.
  • Time-extended human effort was exerted on it. New ideas come out of that process – a kind of creation I highly doubt AI will ever be capable of.
  • Of course, you might hear an idea that’s new to you, but it didn’t create it.
  • Furthermore, though this applies only to the few well-written business books (e.g. Ogilvy on Advertising), the writing style itself holds an intense aesthetic appeal of a kind that may never be attainable by a language model.”

Here’s the no argument:

“No, because

  • I cannot direct the words that come out of a business book. As I read it, I cannot ask the book a question about the book. This is something I can now do with GPT and a sort of GPT-enhancement tool called LangChain.
  • Not only can’t I affect what comes next, I can’t affect the manner in which it is said; the book is only ever written in the style of its author and this may become tiresome. Or it may be inappropriate to my mood or question.
  • Also, when I read a book written by another human, I don’t have the sense of participation that I get from prompting GPT.”

Of course I asked GPT what it thought of these two arguments. I won’t reprint its answer here – if you want to find your own, I’d recommend using this same prompting approach.

*     *     *

Yesterday I listened to an Akimbo podcast – the episode’s entire transcript written by AI and “read” by an AI voice that sounds exactly like Seth Godin.

Well, that’s not entirely true – in fact, that’s outrageously false.

Seth wrote the prompts then read GPT’s answer directly.

The prompts gave no guidance on style, substance, or tone, so they yielded typical GPT material: expansive, slightly perky, tirelessly grammatically correct, well-informed, and formulaically neutral to the point of sounding indecisive.

But the prompts created the answers; thus the prompts created the episode. Thus, Seth Godin created the episode, just with a different tool than before.

And the answers weren’t bad. I won’t track them down, because I got them myself from ChatGPT. (Sidebar: people should do this more.)

My question was almost identical to the one Seth asked:

What are 7 ways in which large language models could be detrimental to human society?

and the answers varied only slightly:

  1. Reinforcing existing biases: Large language models are trained on vast amounts of data, much of which reflects the biases and prejudices of human society. As a result, these models may perpetuate and reinforce harmful biases in areas such as race, gender, and socio-economic status.
  2. Spreading misinformation: Large language models are capable of generating text that is highly convincing and sophisticated, which could be used to spread misinformation and disinformation on a massive scale.
  3. Undermining privacy: Large language models require access to vast amounts of data in order to function effectively. This data may include sensitive information about individuals and could be used for surveillance or other invasive purposes.
  4. Displacing human jobs: Large language models have the potential to automate many tasks that are currently performed by humans, which could lead to significant job displacement and economic disruption.
  5. Exacerbating inequality: The benefits of large language models may not be evenly distributed, and some individuals and communities may be left behind or even harmed by their widespread use.
  6. Contributing to environmental degradation: Large language models require significant amounts of energy to train and operate, which could contribute to the carbon footprint and environmental impact of the technology sector.
  7. Decreasing human creativity: As large language models become more advanced, there is a risk that they may displace human creativity and originality in areas such as writing, art, and music, leading to a loss of human culture and expression.

Excellent and thought-provoking answers, worthy of inclusion in an Akimbo podcast. 

And there’s the bigger point: it’s not the answers that matter, it’s the question. I don’t mean just with language models, but with every other form of intelligence – yourself, other humans, and, for some, God or gods, which you may or may not think of as part of yourself or a collective unconscious. Whomever you ask, your ability to compose questions is, if not everything, pretty dang important.

With possible present bias, the AI crowd has appropriated this ancient skill and rebranded it “prompt engineering”. The Akimbo episode above is a prompt engineering showcase.

PS. Want to try it yourself? Copy this essay into ChatGPT, then below it paste any of the 31 prompts I created on this page: