Transcribathon II: Academic Letters About Astronomy And A Lot of Other Stuff

Archives hide a lot of stories when they are tucked safely away in boxes and on shelves. We at the Museum of University History felt much closer to revealing these stories thanks to a fruitful cooperation with the Special Collections and Digitisation Team. They had organised the archive of the old Astronomical Observatory at the University of Oslo and digitised the collection of academic correspondence of the first Professor of Astronomy, Christopher Hansteen (1784-1873). The letters are published on the platform Alvin (Alvin – Observatoriesamlingene). This obviously makes their contents much more accessible, both for us at the museum and for the university, other stakeholders in Norway, and readers globally. But one barrier remains: the handwriting. It takes a lot of effort to get used to the chicken scrawl of the different men Hansteen corresponded with.

An initiative from Annika Rockenberger, leader for digital research methods in the Humanities and Social Sciences at the University Library, brought a solution to this challenge. She suggested that she could arrange a “transcribathon” on selected parts of the collection. We had no idea what she was talking about, but she explained what a transcribathon is and how to get there: in short, we select letters we would like to have transcribed, Transkribus, a web-based tool for handwritten text recognition, transcribes them, and the next step is the transcribathon, where we invite people to correct the transcribed letters. One great outcome is that we can publish transcribed letters; another is that we, along with other interested people, learn how to use Transkribus.

Preparing The Incompetent

It turned out that knowing Transkribus exists out there, doing its magic tricks, is one thing; actually understanding how it works, and what we humans have to do before we have a transcribed letter in which a user can find information with a search tool, is quite another. This was approximately the process:

  1. Annika explains what is gained with a transcribathon. We are enthusiastic and eager to do this.
  2. Annika explains (very patiently) how Transkribus works.
  3. We try to use Transkribus.
  4. Annika explains (very patiently) how Transkribus works, once again.
  5. We select letters for the transcribathon, by this logic: letters from people in Germany that we knew were influential for Hansteen, and letters from Swedish colleagues. It was important to have text to work with for those of us who aren’t fluent in German.
  6. The letters were harvested from Alvin by Line Nybakk Akerholt at the library and turned into PDFs
  7. The workshop was a public event, organised by the library. We invited people to join from our different networks: People interested in the history of science, the staff in charge of the educational program at the university and archivists at the university. Our colleagues at the library invited people in their networks.

A Good Day To Transcribe

On Thursday, October 23rd, we turned up eager to work at The Humanities and Social Sciences Library. The event was more popular than we had dared to hope, and we were approximately 20 voluntary hackathonians. First, Annika explained (very patiently) how Transkribus works. Then we sorted out the German-speaking participants, so they could work together. The two remaining tables got one Swedish letter-writer per table. We soon found out why Annika made sure that we sat together with the other people working on the collection. The tables worked as colloquium groups from the first minute until we had to stop at the end of the day.

What Did We Learn From This Experience?

  • People who are used to old letters as archive material love to engage with the stories that unfold. And it makes you happy to work together with other people absorbed in letters from the same person! In a reading room, you have no one to tell all the stories to, but in a transcribathon, we could share what we found with people who wanted to listen! The men writing the letters share details about sickness, worries, slander and grief in between the facts or questions that made them send a letter. It takes about 5 minutes to feel a connection with the person who held the pen.
  • We found out that we should have checked how easy the handwriting was to decipher before we started to work with the collections. The poor people at the German table got the letters of an instrument maker working with Gauss in Göttingen. We hoped the letters would give us loads of new insights into how Hansteen established his magnetic observatory, but with hard-to-read handwriting and very specific terminology, the task became demanding, and people were understandably frustrated.
  • We started to realise what Annika had tried to teach us – that Transkribus doesn’t think like a human. This was obvious where the letter writer had crossed out some words and written something else between the lines.  
  • We also understood that even if Transkribus feels like magic, and we just correct the transcribed letters, it is still a lot of work, in more than one way.
  • Now that we know how to correct the transcribed letters, we can work with the material on our own, without a mentor at hand. 

The transcribathon was more popular than we had anticipated, and people asked us if we could arrange it again. This really made us happy. When you work with very specialised material, it’s easy to think that it is only for the few. But reading letters together, and experiencing that archival work doesn’t have to be a solitary activity, gave us new insights. The day concluded with a promise from the library to arrange a new transcribathon, or a half-day one, as well as an intention to make the transcribed letters available and searchable on the Alvin platform, linked to the digitised letters.

We at the museum are so grateful for everyone who joined us on the day with all their enthusiasm and knowledge, and of course, we are very grateful for the cooperation with the University Library.

From Audio to Text: Mapping Scholarly Editions at the University of Oslo and Beyond

The realm of digital scholarly editions is a dynamic and multifaceted field, constantly evolving with technological advancements and academic inquiry. Within this context, the Surveying Digital Scholarly Editions at the University of Oslo project aims to understand the intricacies and challenges faced by such projects. Completed in May 2025, the BærUt! team’s interviews with leading researchers unveiled critical insight into the lifecycle of digital editions. Central to these discussions was a poignant question: In a dynamic and ever-developing field, when is a project truly finished? As one informant replied when asked about their project’s duration:

“Now I’m a professor of religious history, and we’re used to relating to eternity, because believers look into the eternal. And so does the Bibliotheca Polyglotta.”

Jens Erlend Braarvig, interview from January 25th, 2025

The answer to this seemingly straightforward query unravelled into a complex interplay of position durations, project funding, organisational shifts, and individual life circumstances. It is a question resonating deeply across the academic fabric, as researchers strive for perpetual accessibility and ongoing refinement of their digital work. 

What is the beauty of digital editions if not the promise of eternal accessibility and the potential for clicking “publish” – while still being able to edit and add, for an eternity?

In May 2025, the BærUt! team wrapped up its crucial interview rounds, marking a significant milestone in this project. These interviews, totalling 25 hours across 17 sessions, were a mosaic of diverse dialects and languages, challenging the transcription process with both technical and linguistic hurdles. Navigating any transcription process requires precision and adaptability, and the Surveying Digital Scholarly Editions at the University of Oslo project proved to be no exception. The endeavour to accurately convert spoken words into written records is not only integral to the preservation and understanding of academic dialogue but also reveals inherent complexities. This blog post dives into the detailed transcription methods employed by the BærUt! team, showcasing both the triumphs and tribulations faced as they worked to ensure each interview’s integrity within a multilingual framework. The journey through automated transcription tools and human oversight encapsulates a broader understanding of the challenges and innovations shaping today’s academic transcription landscape and digital scholarly editions.

Interviews and Transcriptions

Between October 2024 and May 2025, we conducted seventeen 90-minute interviews with researchers in Oslo working with (digital) scholarly editions. By Oslo, we mean employees at the University of Oslo as well as at other research institutions, such as the National Archives of Norway, the National Library, the Munch Museum, and the Philological Institute. All in all, we gathered 25 hours of interviews, resulting in 338 pages of transcribed interviews. The audio files were automatically transcribed through the University of Oslo’s application Nettskjema by a selection of different AI models provided in Nettskjema, such as Autotekst with OpenAI Whisper V3, Autotekst with NB Whisper, and Autotekst with NB Whisper verbatim.

The interviews were conducted in English, Norwegian, Danish and Swedish, with a variety of dialects and accents. Several of the interviews consisted of participants speaking a different Scandinavian language from the interviewer.

Although Whisper is a general-purpose speech recognition model trained on a large dataset of diverse audio, and can supposedly perform multilingual speech recognition, the options available through Nettskjema resulted in transcriptions in only one language. This means that all interviews that mixed several Scandinavian languages were transcribed into Norwegian.
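To illustrate why this happens, here is a minimal sketch of a local run with the open-source openai-whisper package. This is not Nettskjema’s Autotekst pipeline; the file name, model choice and language code are only illustrative assumptions:

```python
# Minimal sketch of a local Whisper run (not the Nettskjema/Autotekst service).
# Assumes the openai-whisper package is installed; file name is hypothetical.
import whisper

model = whisper.load_model("large-v3")

# When a language is fixed (or detected once for the whole file), every
# utterance is decoded into that language, so an interview mixing Norwegian,
# Swedish and Danish comes out as Norwegian throughout.
result = model.transcribe("interview_07.mp3", language="no")

for segment in result["segments"]:
    print(f"[{segment['start']:7.1f}s] {segment['text']}")
```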

This, of course, led to a proofreading process that was both amusing and frustrating. Dictionaries were consulted several times to ensure that what was heard on the audio recording actually got put down as the right word in the right language. The proofreading of the Latin and Greek slang used by some scholars, however, is left for those more qualified.

The combination of languages and dialects thus resulted in a long process of proofreading and, on many occasions, manual transcription. Language, however, was not the only problem: as of now, AI probably couldn’t write an article about digital scholarly editions and use the correct terminology.

Despite the promise of AI generators to save us all from tedious work, auto-text transcription seems to know very little about digital scholarly editions. Be it the combination of dialects and languages, or the models’ limited knowledge of the field, the transcriptions were completely reliant on human knowledge to get all the information about the digital scholarly editions correct. There were issues with personal names (e.g. project lead and interviewer Annika Rockenberger’s name turned into variations of “Rottenberg” or “Roppenberger”) and with project names (e.g. the Bibliotheca Polyglotta being transcribed as “biblioteket på Rylotta”, “biblioteket på Gråta”, “biblitoeket Børgelotte”, and at least 20 more examples of “Biblioteket” in some “fake city”), but also with technical terms central to surveying digital scholarly editions at the University of Oslo, not to mention Greek and Latin philological terminology.

It Gets Worse Before It Gets Better: Application Updates and IT Confusion

Halfway through the interviews, it happened. The transcription app f4 Transkript was updated and given a complete overhaul. The program became unavailable for some days, and when the university regained access, the interface that met us was completely new. Several changes had been incorporated and important features added: one could now transcribe, code and analyse, all in f4 Transkript. These are amazing new features, and eventually they would make the work more efficient. Unfortunately, the changes were still so new that university IT were unable to assist when I found myself frantically searching for how to even save my transcriptions as .txt files. Without drawing superficial parallels, it was evident that we ourselves were challenged by the same questions of sustainability that surround scholarly digital editions at the University of Oslo and in the Oslo area. Because how are we to navigate the tools meant to streamline working hours when the main IT support is not in sync with the program developers? Hopefully, the time researchers put into educating themselves meets the efficiency expectations. For the analysis part of the project, the updates to f4 were a blessing in disguise, after we had butted heads with the analytical tool NVivo. That, however, was yet another lucky accident. And while for now we can sit back and sigh in relief, lucky accidents will not lead to sustainable working practices within scholarly digital editions at UiO and beyond.

The transcription process for the Surveying Digital Scholarly Editions project proved to be a testament to the complexities of modern academic work. As researchers harnessed a combination of AI technologies and manual proofing, the interplay of language diversity and technical precision highlighted the indispensable role of human intervention. Despite the difficulties presented by software updates and multilingual transcriptions, these obstacles fostered resilience and innovation, enhancing the team’s ability to overcome linguistic and technical challenges. The work is a constant ringing in the ear, a reminder of the shifting ground between multilayered institutional co-operation, the vulnerability of academic hiring processes, and ongoing technical developments.

Two people in an interview session.

From Questionnaire to Interview: Mapping Scholarly Editions at the University of Oslo and Beyond

In a previous blog post (June 2024), we1 discussed launching a comprehensive survey to map Digital Scholarly Editions (DSEs) at the University of Oslo and cultural heritage and research institutions in the greater Oslo area. We finished our survey only a few weeks ago, but along the way, we have had to change our approach. Let’s talk about it!

When we first set out to map the status quo of digital scholarly editions here in Oslo, I estimated that some 50 scholarly editions had been produced over the last 30 years at the university and at other higher education and cultural heritage institutions in the area. We thought that with these numbers, getting a good overview would be achievable and would help build a solid, well-documented foundation. We hoped that 2.5 months would be sufficient to get people to fill in the questionnaire, which was on the longer side. When the end of August 2024 approached, we realised we wouldn’t get any more responses than the three we had received, one of which was only partly filled in.

The feedback from one respondent (who had dropped out of the questionnaire early on) made us understand that our project was too ambitious in both timeline and scope: it had resulted in a questionnaire that was far too extensive, with too many detailed and technical questions. We needed a different approach if we wanted researchers, cultural heritage experts, and developers to answer in such detail.

By early September 2024, my colleagues and I decided to change the questionnaire into a series of interviews instead. This would mean more work for us but hopefully less for the others, if only in terms of perceived time and effort. It would also allow us to get a look behind the curtains, as we assumed that researchers would share more of their deliberations, experiences, and emotions tied to their scholarly edition projects with us. In October 2024, we started with the first interview.

The final interview took place at the beginning of May 2025, and we gathered insights from 17 interviewees. We talked about individual edition projects, large collections of complete works and writings, and major classics and historical sources series, both in digital and printed formats. Our informants were researchers, developers, editors, technical leads, and project managers, many of whom spent significant chunks of their academic lives working on editions.

We will analyse the interviews in depth in the coming months; as we had hoped, they offer much more than plain statistics on which tools or encoding standards were used. In almost all interviews, there is a heartfelt wish to preserve the declining art and techne of preparing scholarly editions, let alone digital ones. We observe a generational shift, where those trained in textual scholarship and digital editions in the early 1990s now approach retirement age and fear that with them, their knowledge and skills disappear again from the primary higher education and cultural heritage institutions.

I will accompany my analysis of the interviews with a series of blog posts throughout 2025. I hope to formulate a couple of hypotheses about the reasons for the perceived decline in DSE knowledge and skill on the one hand, the outstanding achievements of individual scholarly editors with their creative approaches to data modeling and publication on the other hand, and the reasons for the absence of any form of sustainable infrastructure for maintenance and preservation of digital scholarly editions in Norway.

  1. About the use of “we”. This article is mainly written by Annika Rockenberger. It builds on previous work by Johanne Emilie Christensen and Federico Aurora, who contributed to designing the questionnaire and the interview guide, recruiting interviewees, conducting the interviews, and transcribing them. The work of this mapping study is collaborative, and authorship is distributed equally, while individual team members perform specific tasks. For questions, comments, and feedback, contact Annika Rockenberger (corresponding author). ↩︎
A detail from the title page of Irshad al-alibba, an Arabic itinerary from the 19th century.

Arabic in the Spotlight – Creating Ground Truth for a New Text Recognition Model

At the beginning of February, a group of enthusiasts met at the University of Oslo library for the first text recognition sprint, aka Transcribathon for Arabic scripts.

Digital Methods Research Support

Working in the research support section of the university library, one meets many researchers from all over the university. During my consultations, I have talked with quite a few who want to work with their textual material digitally, anything from text mining to collaborative annotations and digital editions, and who have told me how hard it is to do so with Arabic texts, whether modern ones or historical manuscripts. When asked about text recognition algorithms for Arabic, I often had to disappoint them by pointing out that there either isn’t a good enough model out there, or that the good ATR models that are available require software to run in that is anything but user-friendly or beginner-friendly.1 So they were stuck.

Eventually, I thought the best way to support those researchers would be to help them create their own ATR models for the scripts they work with. Were there enough people who work with roughly the same scripts? If so, a combined effort could result in a usable model for text recognition in a virtual research environment that is beginner-friendly, easy to use and reliable.

Irshād al-alibbā – Our Starting Point for Arabic ATR

We decided to give Transkribus a try to develop a model for Arabic scripts.2 We gathered half a dozen researchers from different departments at UiO3 to do a manual transcription sprint that would create enough ground truth to train the first ATR model. The group decided to work with black-and-white scans of a peculiar 800+ page Arabic itinerary, Irshād al-alibbā by Muhammad Amin Fikri: a clean late 19th-century print from the Muqtataf press in Cairo, Egypt.

Arabic print has been relatively stable since the spread of the printing press in the 1800s, and a model that can read such texts is therefore relevant for researchers using primary sources in the Arabic language spanning over two centuries.

The group decided on transcription rules for the sprint:

  • We transcribe from Arabic script to Arabic script (no transliteration);
  • We transcribe the text, incl. diacritics and punctuation, “as it is”, with lowercase letters and page numbers (Indian);
  • Frames or decorations that separate sections from each other are not transcribed;
  • We use { } (curly brackets) for the decorative brackets around page numbers;
  • Where there was no space between harbour (mina) and harbour names, we agreed to introduce spaces.

Transcription Sprint

We started the sprint with a short introduction to ATR and the Transkribus user interface. Afterwards, the participants each had 10 pages of text to transcribe in three 45-minute sessions. This setup was heavily inspired by the well-tested “Shut Up and Write” concept developed at the Academic Writing Center at the University of Oslo Library. We served coffee throughout the sprint and had lunch together.

Participants were encouraged to work quietly; however, questions could always be asked and helping each other with reading and transcribing the historical text was always possible. After we had exhausted our brain capacity for transcribing and staring at the screen, we discussed how best to continue the project.

The documents in question needed to be prepared to make a transcription sprint possible. I created eight sets of ten pages each for participants to work with. I ran “Universal Lines” baseline recognition on all sets to detect lines. The result was surprisingly bad, with many lines being chopped up into small bits that had to be merged later or re-drawn. Where necessary, participants fixed the layout before they started transcribing. Once the pages were prepared, the transcription was done rather quickly.

We had some smaller technical issues with the user interface, which could be resolved ad hoc by using a simple Word document for the line-by-line transcription, which was later copied over to Transkribus.

Ground Truth and Training the Model

By the end of the day, we had produced 28 pages of ground truth. I touched up the lines where needed and trained the first iteration of the Oslo Arabic model, Fikri_v1, already on February 10th. Fikri_v1 was trained on 6,600 words and achieved a character error rate (CER) of roughly 14%. This was about what I had expected, and already a better result than we had with the initial test of the Arabic Khat model in January.
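For readers new to the metric: CER is the character-level edit distance between the model’s output and the ground truth, divided by the length of the ground truth, so lower is better. Here is a minimal, self-contained sketch in Python; the example strings are made up and this is a generic computation, not Transkribus’s exact implementation:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / length of the reference."""
    m, n = len(reference), len(hypothesis)
    # dp[j] holds the edit distance between a prefix of the reference and hypothesis[:j].
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion (character missing in hypothesis)
                        dp[j - 1] + 1,  # insertion (extra character in hypothesis)
                        prev + cost)    # substitution, or match when cost == 0
            prev = cur
    return dp[n] / m if m else 0.0

# Made-up example: one missing character in an 11-character reference ≈ 9% CER.
print(f"{cer('observatory', 'observatry'):.1%}")
```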

The group decided to continue transcribing pages independently, and I will assist them with training another iteration of the ATR model, which hopefully will reduce the CER significantly.

Technology trouble and other smaller hindrances aside, a transcription sprint is a very fun way of completing a tedious task. I highly recommend it! By now, we have about 70 pages transcribed for the next training iteration.

We aim to produce a good enough ATR model for 19th- and 20th-century Arabic print and to go on from there to train more specialised models for each participant’s materials. We plan to make the primary model publicly available as soon as it achieves a CER below 9%.

  1. This is not the place to discuss thoroughly the pros and cons of the available ATR models out there and the communities and platforms they are available in. In short: If you are willing to learn how to set up and maintain eScriptorium, this is the place where you should go for re-using and training ATR models. It is open-source, free software with models shared publicly and available for download. However, in many cases, researchers do not have the time and capacity to become good at DIY software; they also don’t all have IT assistance to cover this part. So, for those who need help, I always recommend a tool like Transkribus. It is user-friendly, runs in any browser, and has a strong community that builds it. Many public models can be re-used and shared within the platform. The costs are OK. I am all for open source and free software for developing one’s tools and learning how to translate research questions into computational instructions; I think, however, that those who work primarily in research support must meet the researchers where they are: at their level of technical and technological skills and their capacity for also setting up, running and maintaining software stacks – or their lack thereof. ↩︎
  2. At the beginning of 2025, a public model for Arabic script was released on the Transkribus platform: it performed with about 15% CER. By the time of writing this blog post, there are three models publicly available, two of which perform with a CER of below 9%, which is rather good. There is another multilingual model, including Arabic (script), which performs at 2%, which is excellent. However, these models are trained on different materials and different periods. We tested our material with the only model available at the time, “Arabic Khat”, which performed poorly. That doesn’t mean the model is useless; it simply doesn’t work with our material. Models capable of working with different layouts, typography, and language specificities tend to be trained on a relatively large number of documents with over 100,000 words. We’re not there yet for Arabic. ↩︎
  3. A historian from the Dept. of Archaeology, Conservation and History; two scholars of Islamic studies, a political science researcher, and a teaching assistant from the Department of Culture, Religion and Asian and Middle Eastern Studies; and a historian of religion from the Faculty of Theology. ↩︎

Using ReproZip-Web for Archiving and Preservation of Digital Projects and Other Complex, Dynamic Websites

Digital – and Digital Humanities (DH) – research projects face many archiving challenges. Web hosting can be expensive to maintain, and these works are often built without long-term preservation in mind. An archival plan is, however, a very important consideration for any digital project, as well as for the libraries hoping to preserve such projects.

For straightforward websites without a dynamic server component, there are many web archiving tools that can successfully capture text and images, and even some advanced, high-fidelity web harvesting tools, like Browsertrix, that can capture social media feeds and a good deal of other dynamic content. But for projects with a server-side component, such as a database-reliant site, the traditional client-side web archiving tools fail to capture the look, feel, and functionality of the work. 

A new grant-funded web-archiving tool (and the first server-side web archiver that we know of) called ReproZip-Web addresses this problem. It leverages the crawling functionality of Webrecorder with the computational reproducibility software ReproZip in order to encapsulate a dynamic web server with its dependencies and the front-end user interface. ReproZip-Web creates a self-contained, isolated, and preservation-ready bundle with all the information needed to reproduce a DH project. This bundle, an .rpz file, contains all of the files and software dependencies needed to replay the project, including the source code, the computational environment (e.g., the operating system, software libraries) and files used by the app (e.g. data, static files).

Screenshot of the ReproZip-Web homepage

There are many challenges to server-side archiving, however. The first is access; you must have access to all of the servers where the app is deployed/hosted in order to capture it with ReproZip-Web. The next is related to technological skills, as familiarity with the command-line interface is needed in order to start, trace, and pack the app. A final challenge is the status of the site itself, as the website/project must be complete and working, and cannot be missing any data or dependencies, at the time of archiving.

Let’s look at the first step to archiving with ReproZip-Web, the tracing step. We first use ReproZip to capture the backend assets. This requires a Linux operating system setup and access to all of the servers where the app is deployed and hosted.

Execute ReproZip at the same time as running the dynamic web app. ReproZip then uses ptrace, a Unix system call that lets one process (ReproZip) observe and control the execution of another process (the web app). ReproZip notes down everything that the web app touches as it executes into an SQLite database, which is then used to create a configuration .yml file with a ton of administrative and technical metadata about what happened during the web app’s execution. ReproZip will trace and record what version of which software was used, what operating system was used, any input and output files, and the provenance of what runs in what order (in case multiple commands are used to get the app running).
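As a rough sketch of what the trace-and-pack step can look like, driven from Python for illustration: the `trace` and `pack` subcommands are standard ReproZip CLI commands, while the server start script and the bundle name are hypothetical placeholders for your own project.

```python
# Sketch of the ReproZip trace-and-pack step, assuming reprozip is installed
# on the Linux host. "./run_server.sh" and "my-dh-project.rpz" are hypothetical.
import subprocess

# 1. Trace: run the app under ReproZip/ptrace. Exercise the site, then stop the
#    server as usual; everything it touched (files, libraries, processes) is
#    recorded in an SQLite database under ./.reprozip-trace/, together with an
#    editable config.yml of metadata.
subprocess.run(["reprozip", "trace", "./run_server.sh"])

# 2. Pack: bundle the traced environment (code, data, system libraries,
#    metadata) into a single self-contained .rpz archive.
subprocess.run(["reprozip", "pack", "my-dh-project.rpz"], check=True)
```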

Image of ReproZip-Web packing details: ReproZip traces input files using ptrace and SQLite, generates metadata into a .yml config file, then packs everything into an .rpz bundle file.

You can edit the config file before the package is created, but we generally discourage it because it can affect how the app runs later down the line.

Once we have an initial capture of the backend assets with ReproZip, we then upload the resulting .rpz file into ReproServer to record and capture the front-end assets.

Image of the front-end capture process with ArchiveWeb.page and ReproServer: upload the .rpz bundle to ReproServer, follow the steps to crawl the front-end assets, then download the final preservation-ready bundle, an .rpz file with a .wacz embedded inside.

The interface on ReproServer will prompt us to choose between automatic or manual web recording. Begin recording the front-end assets by interacting with the site you are archiving. When finished, the output will be a preservation-ready .rpz bundle with a Web Archive Collection Zipped (.wacz) file inside! The lightweight nature of .rpz files makes them ideal for distribution and preservation.

Please feel free to test out ReproZip-Web on one of your own projects, and let us know what you think!