iPad Mini + Tailscale + Screens + Better Display = Reliable screen sharing to MacBook

Here’s the problem. My MacBook is at the office on campus, running long image-processing tasks that take days. The work requires an external hard drive, and I don’t want to disconnect anything. But sometimes I need to log in to do other work, like posting grades over the holiday.

What I want is to be able to log into the Mac remotely and use it for other work from home. The problem: I don’t have another computer. I tried to get an old one, but couldn’t.

I do have an iPad Mini, though.

However, the first time I tried to connect the iPad Mini to the Mac via screen sharing, I couldn’t get a reliable connection, the display resolution was off, the keyboard mapping didn’t work well, and the whole thing was clunky.

I’ve solved it now.

What works?

First, for a reliable connection between the two devices, Tailscale works. It creates a virtual network between the iPad and the MacBook, even though one is on a university network and the other is at home, and it lets me connect the two devices as if they were on the same local network.
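To show what “as if they were on the same local network” means in practice, here is a minimal Python sketch that checks whether the Mac’s Screen Sharing port is reachable over the tailnet. The IP below is hypothetical; substitute your Mac’s own Tailscale address. macOS Screen Sharing is VNC, which listens on port 5900 by default.

    # Minimal reachability check over the tailnet.
    # The IP below is hypothetical; use your Mac's Tailscale address
    # (Tailscale assigns each device an address in the 100.x.y.z range).
    import socket

    MAC_TAILSCALE_IP = "100.64.0.1"  # hypothetical placeholder
    PORT = 5900  # macOS Screen Sharing (VNC) default port

    with socket.socket() as s:
        s.settimeout(3)
        try:
            s.connect((MAC_TAILSCALE_IP, PORT))
            print("Mac is reachable over the tailnet")
        except OSError as e:
            print(f"Not reachable: {e}")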

To actually make the screen-sharing connection, I use Screens by Edovia. It works well and is a native iPad and Mac app.

For a while, I had a hard time with an external keyboard connected to the iPad Mini, because iPadOS was grabbing the Command keys. If I tried to Command+Tab on the shared Mac, iPadOS would instead Command+Tab to another iPad app. Perhaps this is logical, but as someone with a lot of Mac muscle memory, it was super annoying, almost a show stopper, to be working on the Mac and then suddenly be in a different OS.

The fix: invert Command and Control in the Keyboard settings on the iPad, then invert them again in the settings of the Screens app. With both inversions in place, the keys map perfectly on the shared Mac display, which is exactly what I wanted.

Next issue: the display resolution on the Mac did not match the resolution of the iPad Mini.

The solution: Better Display, a $20 Mac app that can create lots of custom resolutions and, crucially, can create a virtual screen the size of the iPad Mini. I connect the iPad to that virtual display on the Mac, so the shared screen matches the iPad’s physical dimensions and fills its entire screen.

So, after an hour: the resolution matches the iPad Mini, the keyboard keys are perfectly mapped, the connection is reliable, and I can log in to the Mac from home with just an iPad.

Not bad. Now to post some grades.

dejatext.py

DejaText is a Python script for identifying duplicate and similar text in a directory of text or markdown files. It scans a directory of .txt or .md files, identifies duplicate and similar text segments, and produces organized reports for easy review. As part of my writing, I find it useful to go through a project and flag repeated words, phrases, or sentences. DejaText helps me with this.
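To illustrate the idea (this is a minimal sketch, not DejaText’s actual code), here is how one might flag near-duplicate sentences across .txt and .md files with Python’s standard difflib. The directory name and the 0.9 similarity threshold are placeholders:

    # A sketch of the idea behind DejaText, not its implementation:
    # compare sentences pairwise and report near-duplicates.
    from difflib import SequenceMatcher
    from pathlib import Path

    def sentences(directory):
        # Yield (filename, sentence) pairs from every .txt and .md file.
        for path in sorted(Path(directory).rglob("*")):
            if path.is_file() and path.suffix in {".txt", ".md"}:
                text = path.read_text(encoding="utf-8").replace("\n", " ")
                for chunk in text.split(". "):
                    if len(chunk.strip()) > 20:
                        yield path.name, chunk.strip()

    def report_similar(directory, threshold=0.9):
        items = list(sentences(directory))
        # Pairwise comparison is O(n^2): fine for a sketch,
        # slow for a big project.
        for i, (file_a, a) in enumerate(items):
            for file_b, b in items[i + 1:]:
                if SequenceMatcher(None, a, b).ratio() >= threshold:
                    print(f"{file_a} / {file_b}\n  {a}\n  {b}\n")

    report_similar("drafts")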

q_transcribe

I want to introduce q_transcribe, a simple tool to transcribe images using QWEN 2 VL AI models.

What did I do to write q_transcribe? I added some simple logic to a command-line wrapper that Andy Janco wrote to run QWEN 2 VL.

q_transcribe can be used to transcribe typed and handwritten text from any image.

How could it be used?

  • Transcribe handwritten notes. One of the methods I use is freewriting longhand. Notetaking is often the first step in my writing process. But, at times, it can feel like a slog to transcribe 20 pages of handwritten notes. Enter q_transcribe.
  • Transcribe handwritten archives. One of the projects I am working on with colleagues is an archival project in Colombia. We’re using QWEN 2B to extract text from images as part of a longer pipeline.

q_transcribe is a simplification of that workflow; it works on a single image, a folder of images, or a folder of folders of images.

What is my contribution? I added logic to Andy Janco’s CLI wrapper for QWEN 2 VL’s sample code. My logic handles JPG, JPEG, or PNG files, sorts them, skips files that have already been transcribed, and chooses between CUDA (Nvidia GPU), MPS (Apple Silicon GPU), or CPU. A sketch of that logic follows.
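Here is a minimal sketch of that logic (not q_transcribe’s actual code; the .md sidecar convention and the transcribe_image call are hypothetical placeholders):

    # A sketch of the added logic, not q_transcribe's actual code.
    from pathlib import Path

    import torch

    def pick_device():
        # Prefer an Nvidia GPU, then an Apple Silicon GPU, then the CPU.
        if torch.cuda.is_available():
            return "cuda"
        if torch.backends.mps.is_available():
            return "mps"
        return "cpu"

    def images_to_transcribe(root):
        # Walk a folder (or folders of folders) for JPG/JPEG/PNG files, sorted.
        for path in sorted(Path(root).rglob("*")):
            if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
                continue
            # Skip images that already have a transcript; here I assume
            # transcripts are saved as .md files alongside each image.
            if path.with_suffix(".md").exists():
                continue
            yield path

    device = pick_device()
    for image in images_to_transcribe("images"):
        print(f"Would transcribe {image} on {device}")
        # transcribe_image(image, device)  # hypothetical call into the model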

In my testing, it works with QWEN 2B on my M1 MacBook Pro with 16 GB of RAM, and on a https://lightning.ai server, which offers researchers free access to a GPU.

To install, clone the repository from GitHub, install the necessary dependencies, and then run it on a folder of images:

    git clone https://github.com/dtubb/q_transcribe.git
    cd q_transcribe
    pip install -r requirements.txt
    python q_transcribe.py images

Structur.py

Structur is a simple, Python-based command-line tool to help extract and organize coded text from research notes.

I’ve been using it for a year now, running it from the Finder. It’s useful for finding the structure of longer pieces of text.

I was inspired by John McPhee’s writing process, which he describes in Draft No. 4:

“Structur exploded my notes. It read the codes by which each note was given a destination or destinations (including the dustbin). It created and named as many new Kedit files as there were codes, and, of course, it preserved intact the original set. In my first I.B.M. computer, Structur took about four minutes to sift and separate fifty thousand words. My first computer cost five thousand dollars. I called it a five-thousand-dollar pair of scissors.”

Structur is my take on what McPhee describes.
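As an illustration only (Structur’s actual code and marker syntax may differ), here is a minimal Python sketch that explodes a notes file into one file per code, assuming hypothetical {code}...{/code} markers:

    # A sketch of the McPhee-style "explode" step; the {code}...{/code}
    # marker syntax is a hypothetical stand-in, not Structur's own.
    import re
    from pathlib import Path

    CODED = re.compile(r"\{(\w+)\}(.*?)\{/\1\}", re.DOTALL)

    def explode(notes_file, out_dir="coded"):
        text = Path(notes_file).read_text(encoding="utf-8")
        Path(out_dir).mkdir(exist_ok=True)
        for code, segment in CODED.findall(text):
            # Append each coded segment to a file named after its code,
            # leaving the original notes intact, as McPhee describes.
            with open(Path(out_dir) / f"{code}.md", "a", encoding="utf-8") as f:
                f.write(segment.strip() + "\n\n")

    explode("notes.md")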

It is available on GitHub.

Cite as:

Tubb, Daniel. Structur.py. GitHub, 2024. https://github.com/dtubb/structur.