Coda and AI excellence

Excellence is not an act, but a habit

Christiaan Huizer
5 min read · Jul 31, 2023


Well, the full quote is this one:

“We are what we repeatedly do. Excellence, then, is not an act, but a habit.”

People believe it was the great Greek philosopher Aristotle who said this. He did not, but I assume he could have said something like it. “It’s Aristotelian in spirit but not in the letter,” said Jonathan Buttaci.

I bring this to mind because Coda creates a context that permits us to excel with AI for two reasons:

  • we can blend structured with unstructured data
  • we can keep track of our efforts by generating prompts in tables and (re)using these prompts. As such, we can learn from what we made. In short: we can automate prompting and turn it into a (programmed) habit that gets better over time.

How can we repeat our prompts?

In tables we can store data, and with buttons and/or formulas we can bring data living outside the table into a table. We need to learn from what we did, and in this context we need a prompt table.

We are used to taking the AI window as starting point: it opens, and a few words invite us to write something. Indeed, it works very well.

Today (end of July 2023) our prompts are not automatically stored anywhere and we cannot keep track of them. Maybe over time this changes (I hope so!), but for now we have to work the other way around to get somewhere.

We start by writing a prompt in a table and we reference this prompt in our AI window by using the formula editor.

We also want to keep track of the outcome. This is something we cannot fully automate, but we can make it easier with a button. Below you see how it works: you reference the prompt in the AI block, and once the result is there, you bring the outcome into the table.

I published a doc you can copy that shows a prompt table logic.

This setup is helpful but not elegant, and thus we hope that the Coda team delivers a native solution.

The AI of our choice

We don’t know what is happening under the AI hood in Coda. Given the many LLM flavors out there, it is likely that in our docs we see the result of various LLMs, maybe even reviewed by an AI before being printed on our screens. Coda does not communicate about this in the open; this guess is based on the input of Brian:

Edit on Aug 03: in this video Shishir confirms that the company is open to the idea of providing access to various LLMs, for many reasons, security being one of them.

Over time we may learn that certain LLMs perform better for certain tasks, or we may want to specialize in manipulating an LLM because … and here you fill in as many reasons as you can think of.

It would make sense that in a Coda context we can make these choices. The doc logic promotes a kind of neutrality. Wasn’t the story that in a doc you can bring in data from any external source via a pack?

Specific packs won’t solve this issue, since packs by definition sit on top of the doc and are thus not native. You cannot link packs to the main AI features in the doc, like AI blocks, while this is exactly what we need.

LLM configuration only makes sense when you can select the LLM of your liking.

LLM configuration

On the site Learn Prompting we find an interesting section on LLM settings.

It explains a few parameters, but there are more, as you see below. The snippet below is part of my Google AI pack; all LLMs have these parameters, and maybe even more.

Large means that the model has a lot of parameters. It doesn’t necessarily mean that they are trained on a lot of data — though the two are correlated (source).

In a context that requires less creativity you want less freedom for the AI, and these parameters can be helpful: they help you tweak the outcome. To get there you need to understand the parameters and a bit of machine learning logic. In this context I’d like to reference this great blog. It explains important AI concepts like vectors and embeddings in language models.

parameters: {
  temperature: 0.2,
  maxOutputTokens: 256,
  topP: 0.8,
  topK: 40
}
Top p

Top p is a parameter used in Natural Language Processing (NLP) that controls the probability distribution of words that are generated by a language model. It limits the sampling to the most likely words whose combined probability reaches a certain threshold. For example, if top p is set to 0.8, the model only considers the smallest set of most likely words whose probabilities sum to 80%.
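The filtering step behind top p (often called nucleus sampling) can be sketched in a few lines of Python. The function name and the toy word probabilities are mine, for illustration only:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of words whose cumulative probability reaches p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for word, prob in ranked:
        kept.append(word)
        cumulative += prob
        if cumulative >= p:
            break
    # renormalize the surviving words so their probabilities sum to 1 again
    total = sum(probs[w] for w in kept)
    return {w: probs[w] / total for w in kept}

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "rock": 0.05}
print(top_p_filter(probs, 0.8))  # keeps only "cat" and "dog" (0.5 + 0.3 = 0.8)
```

Note that with p = 0.8 the rarer words are cut off entirely, which is why a lower top p gives safer, more predictable output.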

Top k

Top k is another parameter used in NLP that controls the probability distribution of generated words. It limits the number of words that can be sampled to the k most likely words. For example, if top k is set to 5, the model will only consider the 5 most likely words for sampling.
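Top k is even simpler to sketch: sort the words by probability and keep the first k. Again, the function name and example values are mine:

```python
def top_k_filter(probs, k):
    """Keep only the k most likely words, then renormalize their probabilities."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(ranked[:k])
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "rock": 0.05}
print(top_k_filter(probs, 2))  # only the 2 most likely words survive
```

The difference with top p: top k keeps a fixed number of candidates no matter how the probability mass is spread, while top p adapts the number of candidates to the shape of the distribution.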

Temperature

Temperature is a hyperparameter used in NLP that controls the level of randomness in the generated text. It scales the logits (log-odds) of the predicted words before they are converted into probabilities. A higher temperature results in a more diverse set of generated words, while a lower temperature leads to more conservative and predictable outputs.
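The scaling step can be made concrete with a small Python sketch (my own toy numbers; real models work on thousands of logits, but the mechanics are the same):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then convert to probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # sharp: the top word dominates
high = softmax_with_temperature(logits, 2.0)  # flat: more diverse sampling
print(low, high)
```

With temperature 0.2 the most likely word takes almost all the probability mass; with temperature 2.0 the distribution flattens and rarer words get a real chance, which is what "more creative" output means in practice.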

maxOutputTokens

maxOutputTokens is a parameter used in NLP that limits the number of tokens (words or subwords) generated by a language model. It is used to prevent the model from generating overly long sequences that may be nonsensical or irrelevant to the task at hand. This parameter is often set to a fixed length, such as the maximum length of text that the model was trained on.
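The effect of the cap is easy to see in a toy generation loop. Everything here is hypothetical (the `next_token_fn` stands in for a real model):

```python
def generate(next_token_fn, prompt_tokens, max_output_tokens):
    """Toy generation loop that stops at max_output_tokens or end-of-sequence."""
    out = []
    while len(out) < max_output_tokens:
        token = next_token_fn(prompt_tokens + out)
        if token == "<eos>":  # the model signals it is done
            break
        out.append(token)
    return out

# a toy "model" that would ramble forever without the cap
tokens = generate(lambda ctx: "word", ["hello"], max_output_tokens=5)
print(len(tokens))  # 5: the cap stopped the loop
```

In practice you set this to the longest answer you actually want; a low cap also keeps API costs predictable, since most providers bill per token.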

My name is Christiaan and I blog about Coda. Since the summer of 2023, mainly about how to Coda with AI to support organisations dealing with texts and templates. My blogs are for beginners and experienced users. The central theme is that in Coda everything is a list.

I hope you enjoyed this article. If you have questions, feel free to reach out. Though this article is free, my work (including advice) won’t be, but there is always room for a chat to see what can be done. You find my free contributions in the Coda Community and on Twitter.

Coda comes with a set of building blocks ー like pages for infinite depth, tables that talk to each other, and buttons that take action inside or outside your doc ーso anyone can make a doc as powerful as an app (source).

Not to forget: the Coda Community provides great insights for free once you add a sample doc.


Christiaan Huizer

I write about Coda.io - AI and (HR )planning challenges. You find blogs for beginners and experienced makers. I publish about once per week. Welcome!