Use the OCR Pack to extract text from PDF files
Steps to read your large PDF files in Coda
I did not create this pack nor am I affiliated with it.
I write about it because I believe it is a useful pack.
I contributed a tiny bit to the pack development by sharing general insights.
Intro
I felt the need for this pack while working with organizations in which employees processed large PDF files, often scanned documents. I first tried the free packs for reading PDF files and hit their limits: too much data, and they often missed parts of the text. The latter requires OCR, the former a smart set up.
Rickard recognized these pain points and solved them in two steps: first, he moved the heavy work server side; second, for cases where a button cannot handle the text size, he created an intermediary step that still delivers the desired outcome. I’ll explain both below.
The outcome is that you can print the content of the PDF into a field and from there use the functions living in your Coda toolbox, like Filter(), RegexExtract() and all the rest. You can also use Coda AI.
General Set up
You go to packs under Insert and type OCR in the search bar. I always have to look for the search bar to find the pack I want.
You click on it and you install the pack for free.
Get your token
Here you already see the link to get your token:
https://coda.io/form/Get-your-Token_dtBMsL2QN1P
It opens a Coda form and you click on submit. The token is user based, so Rickard has your email address. You need this token to activate the pack, and the token is also tied to your credit supply.
You can also get the token later, when you try to scan a file without having one.
Now we have our token and we can continue with the functions.
The functions
On the right side you see this.
You have the choice between two functions: Scan() and ReadTextFile(). The other functions are about the credits; more about that later. The Scan() function directly reads the file and pushes the content into the column. However, it has a limit, as you can read in the details:
I’ve noticed the 85 kB limit is easily exceeded, so using a button instead of a column function as the default set up is more practical. A button only executes when clicked, so it prevents the automatic OCR credit consumption that happens with column functions. This approach also helps avoid excessive processing demands and circumvents a caching issue within Coda.
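For completeness, such a direct column formula would look roughly like the line below. Scan() is the pack’s function, but the argument shown is my assumption; check the formula details of the pack for the exact parameters (it may also want your token).

Scan(thisRow.[PDF File])

Given the 85 kB limit and the credit consumption described above, the button route below is the better default.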
That brings me to the following set up. You create a table with a file column (one PDF per row) and a button. This button generates a text file.
Next, we streamline the process by applying a function to read the text file directly into a canvas column, maintaining the table’s organized structure.
This approach allows you to store the content of a file in a canvas column without the use of a button, since buttons do not execute once a certain size limit is passed. Now we have all the data available and we can start using it once we have enough credits.
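As an illustration, the reading column could contain something like the formula below. The column names PDF File and TXT File are placeholders, and the exact parameters of ReadTextFile() may differ from this sketch; Rickard’s demo doc, mentioned at the end of this article, shows the real formulas.

ReadTextFile(thisRow.[TXT File])

The button runs the OCR on the PDF file and stores the resulting TXT file in the TXT File column, so the formula above only has to read a file that already exists in the row.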
The credits
The OCR pack is a free pack that operates on a credit system. You receive 50 free credits (equivalent to 50 PDF pages), and any additional credits are purchased directly from the developer, Rickard, using your credit card. You only pay for what you use.
One of the aspects I appreciate most about this pack is that the price is unrelated to the number of makers in your workspace. Additionally, the pricing is very affordable at just $5 for 1000 credits (1000 pages). It’s great value for money.
You follow the consumption of your credits via the table Requests, the first option in the above screenshot. Below are some results after testing and paying $5.
As said before, due to the caching mechanism within Coda docs, an action might be counted twice. However, the cost impact is negligible, so there’s no need for concern.
More details here. The credits don’t expire and the pack is free; again, it is a very generous set up. Below is the page confirming my purchase.
How to make use of the pack
As suggested, you create a table with a file column (one PDF per row), a button that generates the TXT file, and a formula that reads that file back into the doc. With the data in Coda we can use the search bar top left to find the files that contain specific data. Once we have that we can apply all sorts of functions.
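Beyond the search bar, a filter formula gives you the same in a more controlled way. A sketch with placeholder names: the table Contracts, the column Extracted Text and the search term are mine, and RegexMatch() is a standard Coda formula, not part of the pack.

Contracts.Filter(RegexMatch([Extracted Text], "Rotterdam"))

This returns every row whose extracted text mentions the term, and from there RegexExtract() or Coda AI can pull out the specific values you are after.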
Example
I have a doc in which we use a combination of Coda AI and this pack to check for the addresses in the documents (contracts) and to ask who signed the contracts and on what dates. Here the problematic part is Coda AI. Even with an unlimited AI pack, it does not work well, so I am afraid I have to develop a pack to bring in my Gemini 1.5 AI bot to do the job instead. The OCR pack does a wonderful job.
Coda Brain might over time partly solve this issue. I am prudent, because the Snowflake AI is likely not going to be active on the surface of the docs, but works in the background. I still have to check, but it may require that we do not store PDF files in Coda but in Google Drive or Dropbox and allow the Snowflake AI to read the data there. That would feel like a detour. To be seen. As it is, Coda AI is far from good enough.
Rickard created a demo doc you can copy and use to check the OCR logic.
I hope you enjoyed this article. If you have questions, feel free to reach out. My name is Christiaan and I blog about Coda. Though this article is free, my work (including advice) won’t be, but there is always room for a chat to see what can be done. You find my (free) contributions in the Coda Community and on Twitter. The Coda Community provides great insights for free once you add a sample doc.