This page aggregates blog entries by people who are writing about TeX and related topics.
TUGboat volume 44, number 2, has been mailed to TUG members. It is also available online and from the TUG store. In addition, prior TUGboat issue 44:1 is now publicly available. Submissions for the next regular issue are also welcome and encouraged; that deadline is October 15, 2023. Finally, please consider joining or renewing your TUG membership if you haven't already (we'll send this issue immediately), and thanks.
Right from the first version, siunitx has supported uncertainty values in numbers. Uncertainties are a key piece of information about a lot of scientific values, and so it’s important to have a convenient way to present them. The most common uncertainty we see is a symmetrical one, a value plus-or-minus some number, for example 1.23 ± 0.04. This could be a standard deviation from repeated measurement, or a tolerance, or a value derived in some other way. Luckily for me, the source of such a value doesn’t matter: siunitx just needs to be able to read the input, store it and print the output. For both reading and printing, siunitx has two ways of handling these symmetrical uncertainties: a ‘long’ form, in which the uncertainty is given as a complete number, for example 1.23 ± 0.04, and a ‘compact’ form, in which the uncertainty is shown relative to the digits in the main value, for example 1.23(4). In version 3 of siunitx, I took that existing support and added a long-requested new feature: rounding to an uncertainty. That means that if you have something like 1.2345 ± 0.0367 and ask to round to one place, the uncertainty is first rounded (to 0.04), then ...
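A rough sketch of the two input forms and the rounding option described above (not taken from the post; the v3 key names round-mode, round-precision and uncertainty-mode are assumed from recent siunitx releases):

  \documentclass{article}
  \usepackage{siunitx}
  \begin{document}
  % 'long' input form: the uncertainty is a complete number
  \num{1.23 +- 0.04}
  % 'compact' input form: the uncertainty is relative to the final digits
  \num{1.23(4)}
  % round value and uncertainty together, keeping one digit in the uncertainty
  \num[round-mode = uncertainty, round-precision = 1]{1.2345 +- 0.0367}
  % the printed form can be chosen independently of the input form
  \num[uncertainty-mode = compact]{1.23 +- 0.04} % prints 1.23(4)
  \end{document}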
Today’s blog post is a teaser for a video class called Computer Vision for Digital Humanities (funded by CLARIAH-AT with the support of BMBWF). The self-learning resource (video lessons plus Jupyter notebooks) is an introduction to Computer Vision methods for Digital Humanists. It addresses some Humanities issues that many typical introductions to computer vision do not cover. This post is an example of such reflection. It gives an insight into the first exercise of the class: filtering a list of metadata to create a ground truth dataset for training a classification algorithm. This blog post does not contain the actual data (so stay tuned for the video class!) but discusses the issues which arise when creating a ground truth data set for computer vision using Humanities data. Citation suggestion: Suzana Sagadin & Sarah Lang, How to create a ground truth data set for computer vision using Humanities data, in LaTeX Ninja Blog, 04.07.2023. https://latex-ninja.com/2023/07/04/how-to-create-a-ground-truth-data-set-for-computer-vision-using-humanities-data/ Goal of this session: This exercise… read more: How to create a ground truth data set for computer vision using Humanities data
During the annual conference of the DHd Association, the Empowerment Working Group organized a workshop on the topic of Data Feminism in the Digital Humanities (organized by Luise Borek, Nora Probst & Sarah Lang, technical support: Yael Lämmerhirt)[1]. This short blog post aims to present preliminary results to document the event and raise awareness of this essential topic. Everyone is invited to participate in the project and should contact the Empowerment Working Group if interested. Citation suggestion: Luise Borek*, Elena Suárez Cronauer, Pauline Junginger, Sarah Lang, Karoline Lemke & Nora Probst, Data Feminism as a Challenge for Digital Humanities? [English version], in LaTeX Ninja Blog, 01.07.2023. https://latex-ninja.com/?p=5068 *All authors contributed equally. Disclaimer: This is a machine-translated version of the original German article (found here), powered by ChatGPT 4. I read over it to make sure there’s nothing wildly inappropriate in there, but since the terms used are crucial when it comes to this topic, the German version is the one we… read more: Data Feminism as a Challenge for Digital Humanities?
TUGboat volume 44, number 1, has been mailed to TUG members. It is also available online and from the TUG store. In addition, prior TUGboat issue 43:3 is now publicly available. The next issue will be the TUG'23 proceedings; the deadline for papers to be included there is July 23, 2023. Early submissions for the next regular issue are also welcome and encouraged; that deadline is October 15, 2023. Finally, please consider joining or renewing your TUG membership if you haven't already (we'll send this issue immediately), and thanks.
After about 18 years of working to give Debian users a great TeX experience, things have turned sour between Debian and me. So I think it is time to look a bit at my...
TikZ galleries
Get the Champagne ready: we have released the final images of TeX Live 2023. The biggest change in this year’s release is the switch to 64-bit Windows binaries and the renaming of the binary directory from...
The pretest for TeX Live 2023 is in progress, for anyone who'd like to help with the upcoming release. The new binaries are presumed stable at this point, barring bug reports, so please try it out with your own documents if you have a chance.
Modelling is central to the Digital Humanities. So much so that some claim it is what unites the DH as a field or discipline! But what is modelling? What do we mean by it anyway? This post will hopefully provide you with the primer you need. Sorry for the very sporadic blogging lately. I still haven’t figured out how to fit blogging into my PostDoc life. I think I want to get to a rhythm of around 1-2 posts per month. More than that is absolutely not realistic but, as you may have realized, I didn’t even manage that consistently over the last year. Then again, it’s not like I’m not producing teaching materials anymore. Most of my efforts this year have gone into all the classes I have been teaching (I’m hoping to share slides and teaching materials for all of them once they are cleaned up) – I have taught an intro to text mining, my usual information… read more: What’s the deal with modelling in Digital Humanities?
A great welcome to the new year — wnarifin has agreed to take up the maintenance of usmthesis! Please follow new updates at https://github.com/wnarifin/usmthesis. The old repo is now archived. Thank you again wnarifin!
I’ll cut to the chase: Cuti-cuti Malaysia calendar for 2023! PDF for Penang version: download here. For other states, download the .zip or clone this Overleaf project to your own Overleaf account, then change \def\mylocation{Penang} to e.g. \def\mylocation{Selangor}. If you would just like a calendar without the Malaysian holidays and/or Chinese lunisolar calendars, see this GitHub […]
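For instance, the change in the cloned Overleaf project would look something like this (a hypothetical sketch; the placement in the project is assumed, only the \def\mylocation line comes from the post):

  % pick the state whose holidays should appear in the calendar
  \def\mylocation{Selangor}  % originally \def\mylocation{Penang}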
TUGboat volume 43, number 3, has been mailed to TUG members. It is also available online and from the TUG store. In addition, prior TUGboat issue 43:2 is now publicly available. Submissions for the next issue are welcome; the deadline is March 26, 2023 (early submissions are especially appreciated). Finally, please consider joining or renewing your TUG membership if you haven't already (we'll send all issues for the year immediately), and thanks.
Today’s post is a short introduction to digital scholarly editing. I will explain some basic principles (so mostly theory) and point you to a few resources you will need to get started in a more practical fashion. I’m teaching a class on digital scholarly editing this term, so I thought I could use the opportunity to write an intro post on this important topic. How does a Digital Edition relate to an analogue scholarly edition? Unlike analogue scholarly editions, digital editions are not limited to text, and they overcome the limitations of print by following what we call a digital paradigm rather than an analogue one. This means that a digital edition cannot be given in print without loss of content or functionality. A retrodigitized edition (an existing analogue edition which is digitized and made available online) thus isn’t enough to qualify as a digital edition, because it follows the analogue paradigm. Ergo: It’s not about the storage medium. A… read more: What you really need to know about Digital Scholarly Editing
It is quite natural to think that separating a word into individual characters is easy. It turns out that for the computer this isn’t really the case. If we look at a system that understands Unicode (like XeTeX or LuaTeX), most of the time one ‘character’ is stored as one codepoint. A codepoint is a single character entity for a Unicode programme. For example, if we take the input café, it is made up of four codepoints: U+0063 (LATIN SMALL LETTER C), U+0061 (LATIN SMALL LETTER A), U+0066 (LATIN SMALL LETTER F) and U+00E9 (LATIN SMALL LETTER E WITH ACUTE). So we could in XeTeX/LuaTeX use a simple mapping to grab one character at a time and do stuff with it. However, that’s not always the case. Take for example Spın̈al Tap. The dotless-i is a single codepoint, but there is no codepoint for an umlauted-n. Instead, that is represented by two codepoints: a normal n and a combining umlaut. As a user, it’s clear that we’d want to get a single ‘character’ here. So there’s clearly more work to do. Luckily, this is not just a TeX problem and the Unicode Consortium have thought about it ...
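To see this codepoint make-up directly, something along these lines can be used (a minimal sketch, not from the post; it assumes LuaLaTeX with Lua 5.3's utf8 library and the luacode package):

  \documentclass{article}
  \usepackage{luacode}
  \begin{document}
  \begin{luacode*}
  -- list the codepoints of a UTF-8 string: café gives four of them
  for _, cp in utf8.codes("café") do
    tex.sprint(string.format("U+%04X ", cp))
  end
  \end{luacode*}
  \end{document}

Running the same loop over the n̈ of Spın̈al would show two codepoints (U+006E followed by the combining umlaut U+0308), which is exactly why a purely codepoint-based mapping falls short.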
Subscribe to the TeX community RSS feed
Do you write about TeX and related topics? Let us know and we will add your feed to this page.