The most important bit of metadata for a terminal screen reader is actually the cursor location. This is the one piece of information that is required to make an unstructured grid of characters surprisingly useful to a person mostly constrained to linear reading.
The terminal cursor acts as the focus. It indicates which part of the screen is currently being edited or selected. For editors, this is a pretty well-understood concept: where the cursor is, the next character will be inserted. However, there is more to a text-based user interface than just editing fields. And the cursor is not always visible.
A while ago, I looked at the brick terminal UI library for Haskell. When I tried its various demo programs, I noticed that my screen reader reported the cursor in the lower right corner of the application while a menu item was selected. I had to manually investigate the screen and look at attributes (colours) to figure out which item in a list was currently selected. Amongst BRLTTY developers, we decided long ago not to work around these issues on the screen reader side, but rather to make use of open source and fix the problems whenever we see them in the wild. Behind the scenes, we have fixed a bunch of frameworks and applications to place the cursor at the locus of focus. So I set out to understand brick internals to fix this.
brick works differently from most TUI frameworks I know. You don’t have a single cursor which you set to a particular location. Rather, all the different components drawn on the screen can declare their own cursor location, and the composition mechanism ultimately chooses which cursor should be used.
This, while being pretty flexible, looked fundamentally wrong to me. Why? It doesn’t reflect the reality of a terminal.
There is no such thing as “no cursor” in a terminal. The cursor might be hidden, so it is not rendered on the screen. But the cursor still has a location where it sits and waits to print the next output character to the screen. A screen reader will pick that location up, no matter if the cursor is visible or hidden.
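Incidentally, hiding and showing the cursor is done with the standard DECTCEM escape sequences; neither of them changes where the cursor actually is:

printf '\033[?25l'   # hide the cursor; its position stays where it was
printf '\033[?25h'   # make the cursor visible again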
So after patching brick to declare a cursor when rendering certain list items, checkboxes and radio items, I realized the actual missing bit: brick had no concept of cursor visibility. If the composition mechanism did not see a cursor declared, it would hide the cursor on-screen and “pretend” there was none. However, as we have learnt above, that’s just not true, and programs like screen readers actually rely on the cursor location to indicate the locus of focus.
Raising this issue with the brick maintainer uncovered the fact that the low level vty library used by brick to do the actual terminal output did not have a concept of a hidden cursor either. Jonathan fixed this in vty 5.33.
And since vty 5.33 is now in stack LTS, I thought to myself yesterday that it was time to finally add support for invisible cursors to brick.
There is now a new function putCursor which has the same type signature as the already existing showCursor, but will make sure the cursor is not visible on-screen. This can and should be used to place a cursor at the locus of focus, even if that location is already visually indicated by different attributes.
The stock widgets that come with brick should now all be screen reader friendly. If you happen to maintain a brick application which provides a render function to something like renderList, please consider extending it to use showCursor. Once brick 0.64 is released, you can change to putCursor to clean up the visual appearance of your program.
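For illustration, here is a minimal sketch of such a render function; the Name type and renderItem are placeholder names of mine, not part of brick:

import Brick.Types (Location(..), Widget)
import Brick.Widgets.Core (showCursor, str)

data Name = ListItemCursor deriving (Eq, Ord, Show)

-- A render function suitable for renderList: declare the cursor at the
-- start of the selected item so screen readers can find the focus.
renderItem :: Bool -> String -> Widget Name
renderItem selected label
  | selected  = showCursor ListItemCursor (Location (0, 0)) (str label)
  | otherwise = str label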
How this can be achieved will depend on your document processing system. In this article, we will cover Pandoc as it is used in the Hakyll static site generator.
I am the author of a program (BMC) to parse and transform Braille Music Code. One of its most basic features is to pretty-print the parsed input, which can be used to reflow braille music code according to its peculiar hyphenation rules. It would be useful to plug this functionality into Pandoc such that certain code blocks could be automatically checked for validity and formatted for a certain line width.
Hakyll is basically a high level build system for static websites. It has a Compiler type which is responsible for doing something with your input data. The most important Compiler in Hakyll is the pandocCompiler, which uses Pandoc under the hood to read your input data and write it back as HTML.
What we need is a way to hook into this mechanism so that we can transform the underlying Pandoc AST before it gets passed to the Pandoc writer.
The main entry point to write a pandocCompiler which transforms the AST is the function pandocCompilerWithTransform which has the following type signature:
pandocCompilerWithTransform :: ReaderOptions -> WriterOptions
                            -> (Pandoc -> Pandoc)
                            -> Compiler (Item String)
Ignoring the options, it takes a function from Pandoc to Pandoc and returns a Compiler which will ultimately produce the result of the pandoc writer as a String.
This can be enough if your transformation can never fail. However, it is likely that if you want to do your own pre-processing, you are also interested in reporting errors and making the build process fail in case something went wrong. What we need is an effectful version of the same function. pandocCompilerWithTransformM is just that.
pandocCompilerWithTransformM :: ReaderOptions -> WriterOptions
                             -> (Pandoc -> Compiler Pandoc)
                             -> Compiler (Item String)
The Walkable typeclass from the pandoc-types package allows walking a Pandoc bottom-up, replacing all occurrences of a Block with the result of applying a function to it.
In particular, we want to use walkM since we want to make use of the Hakyll Compiler monad. Here a will be Block, b will be Pandoc, and m will be Compiler.
walkM :: (Walkable a b, Monad m, Applicative m, Functor m) => (a -> m a) -> b -> m b
So our transform function will look something like this:
transform :: Pandoc -> Compiler Pandoc
transform = walkM codeBlock

codeBlock :: Block -> Compiler Block
The Pandoc type consists of metadata and a list of Blocks. The Block type contains the bulk of the structural elements of a document. The pandoc command line program can dump its internal representation when the native output format is selected. This can be used to figure out what we can match.
~~~{#id .class name=value}
content
~~~
piped to pandoc -t native will print

[CodeBlock ("id",["class"],[("name","value")]) "content"]
With this information, we can write a function which matches on a specific CodeBlock class and pipes the content through an external program. At this point, you can do pretty much anything. Validating syntax. Reformatting code. You name it.
codeBlock (CodeBlock (ident, ["bmc"], namevals) content) = do
  let toArg (a, b) = ["--" ++ Text.unpack a, Text.unpack b]
  let args = concatMap toArg namevals
  (ec, out, err) <- unsafeCompiler $
    readProcessWithExitCode "bmc" (args ++ ["-"]) content
  case ec of
    ExitSuccess -> pure $ CodeBlock (ident, ["bmc"], namevals) out
    ExitFailure _ -> fail $ Text.unpack err
codeBlock x = pure x
And now we can write Braille Music code and be sure it passed validation.
```{.bmc locale=de width=12}
!{ihg&gfeyefg{ihg zhhh&hhh%iii{ihg2k
```
⠐⠷⠊⠓⠛⠯⠛⠋⠑⠐
⠽⠑⠋⠛⠷⠊⠓⠛
⠵⠓⠓⠓⠯⠓⠓⠓⠐
⠿⠊⠊⠊⠷⠊⠓⠛⠣⠅
Putting it all together, here is the source code of the BrailleMusicCompiler module.
{-# LANGUAGE OverloadedStrings #-}
module BrailleMusicCompiler ( brailleMusicCompiler ) where

import Data.Text (Text)
import qualified Data.Text as Text
import Hakyll ( Compiler, Item
              , defaultHakyllReaderOptions, defaultHakyllWriterOptions
              , pandocCompilerWithTransformM
              , unsafeCompiler )
import System.Exit ( ExitCode(..) )
import System.Process.Text ( readProcessWithExitCode )
import Text.Pandoc ( Block(CodeBlock), Pandoc )
import Text.Pandoc.Walk ( walkM )

brailleMusicCompiler :: Compiler (Item String)
brailleMusicCompiler =
  pandocCompilerWithTransformM defaultHakyllReaderOptions
                               defaultHakyllWriterOptions
                               transform

transform :: Pandoc -> Compiler Pandoc
transform = walkM codeBlock

codeBlock :: Block -> Compiler Block
codeBlock (CodeBlock (ident, ["bmc"], namevals) content) = do
  let toArg (a, b) = ["--" ++ Text.unpack a, Text.unpack b]
  let args = concatMap toArg namevals
  result <- unsafeCompiler (bmc args content)
  case result of
    Left e -> fail $ Text.unpack e
    Right r -> pure $ CodeBlock (ident, ["bmc"], namevals) r
codeBlock x = pure x

bmc :: [String] -> Text -> IO (Either Text Text)
bmc args music = do
  (ec, out, err) <- readProcessWithExitCode "bmc" (args ++ ["-"]) music
  pure $ case ec of
    ExitSuccess -> Right out
    ExitFailure _ -> Left err
To use your new custom pandoc-based compiler, all you have to do is replace pandocCompiler in your existing site.hs with whatever name you chose for your custom compiler. For instance, this article has been processed with the following match rule in site.hs.

match "blog/*" $ do
  route $ setExtension "html"
  compile $ brailleMusicCompiler
        >>= saveSnapshot "content"
        >>= loadAndApplyTemplate "templates/post.html" (postCtx tags)
        >>= loadAndApplyTemplate "templates/default.html" (postCtx tags)
        >>= relativizeUrls
This article is part of Advent of Haskell 2020.
For about two weeks now, I have no longer been able to use Google with my favorite text-mode web browser, Lynx.
It started about a month ago, when I noticed that sometimes, after submitting my search query, I was presented with a search result page which didn’t allow me to invoke the actual links. When I reloaded the start page, it suddenly worked again. So I guess Google was doing experiments with its users: if someone didn’t actually click on any result links and reloaded the main page, they got the old start page back.
But now, the redesign seems to be finalized, and I am no longer able to use Google with Lynx at all.
I don’t have an X11 session open all the time, and I don’t have a Windows PC running next to my Linux workstation. So I don’t have an easy way to switch to a graphical browser just to research things while I do my work.
Luckily, there is duckduckgo. However, I have to admit, its search results are far inferior to what Google used to give me. But being a blind person, I guess I have to accept that Google doesn’t care anymore.
Maybe I should delete my Google account as a consequence.
Bye bye mainstream, hello ghetto.
This rant has been featured on Hacker News. A Google dev noticed the thread and managed to get basic Lynx support back online in just a few hours. I am impressed and grateful. However, the new design is still a step backwards. It is less clear which link will take you to which site, and there is no way to retrieve cached versions of websites anymore.
Non-native English-speaking blind people typically have their default speech language set to their native language. When they end up browsing a site in English (or any language other than their native one, for that matter), the screen reader starts to read English with the pronunciation rules of their native language. While some people start to understand such speech output after a while, it is really a pain to work with. Of course, you can switch to a different speech language manually, but that takes time, and people end up not doing it in a lot of situations.
Some screen readers have automatic language detection implemented, but it fails to work correctly in many cases, which is why most users have autodetection actually turned off.
Use the lang= attribute to declare what language your document (or parts of your document) uses. A lang= attribute on the top-level <html> tag will let screen readers know what the default language of the document is.
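For example, a document written in English would declare:

<!DOCTYPE html>
<html lang="en">

This one attribute is enough for screen readers to pick an English voice by default.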
This is a very simple change that you might be able to do in a few seconds/minutes, depending on what framework you use.
Please consider declaring your document language; it will make the overall experience of surfing the net a lot nicer for blind users whose native language is not English.
I am writing this article because I got frustrated with Hacker News not declaring lang="en". Whenever I visit the site, I get all the content read with a German voice. However, HN is definitely not the only big site that gets this wrong.
If you are maintaining websites, please take the time and check if you are declaring the document language. If not, please consider adding this very small change to your site.
A few years ago, I created a small web game to demonstrate what sort of wrongly-pronounced words you have to deal with as a blind user if speech language settings do not fully work.
You can find it here.
I know, it is really late, but two days ago, I discovered Racket. As a Lisp person, I immediately felt at home. And realizing how the language dispatch mechanism works, I couldn’t resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.
Racket programs always start with #lang. How convenient.
MarioLANG programs for Racket therefore look something like this:
#lang mario
++++++++++++
===========+:
==
So much for abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2D instruction “space” to traditional code. MarioLANG not only allows for nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction.
Coupled with the “skip” ([) instruction, this allows creating loops which have two exit conditions and reverse code execution on every pass. Something like this:
@[ some brainfuck [@
====================
And since this is a 2D programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.
Discovering these patterns and translating them to compilable code is quite beyond me right now. Let’s see what time will bring.
Apparently, ranting about it after a year of being ignored was not the worst thing to do. I can now confirm that the current dev version of Qt works properly with JAWS for Windows and QTextEdit widgets. This is quite a substantial fix, as it will likely improve the accessibility of many Windows applications written in Qt.
So this bug is finally (after more than a year of waiting) fixed. Thanks to André de la Rocha for implementing UI Automation support, which is apparently what was missing to make JAWS happy.
Most of the functionality is in a compiler-alike backend. But eventually, I wanted to create a user interface to improve the interactive experience.
So, the problem again: which toolkit should I choose to be accessible on most platforms? The last time I needed to solve a similar problem, I used Java/Swing. That has its problems, but it actually works on Windows, Linux and (supposedly) Mac. This time around, my implementation language is C++, so Swing didn’t look that interesting. It appears there is not much that fulfils these requirements. Qt looked like it could. But since I had already had bad experiences with Qt claiming accessibility it never really implemented, I was at least a bit cautious. Around 10 years ago, when Qt 4 was released, I found that the documentation claimed Qt 4 was accessible on Linux, but it really never was until a very late 4.x release. This information was a blatant lie, trying to lure uninformed programmers into using Qt, much to the disservice of their disabled users. If you ask a random blind Windows user who knows a bit about toolkits, they will readily tell you that they hate every app written in Qt.
With this knowledge, and in the spirit of “we can change the world”, I wrote a private mail to the person responsible for maintaining Qt accessibility. I explained to them that I was about to choose Qt as the UI platform for my program, and that my primary audience is users who rely on accessibility. I also explained that cross-platform support (esp. good support on Windows) is a necessary requirement for my project. I basically got a nice marketing-speak answer back, but when I read it back then, I didn’t fully realize that just yet. The tone was basically: “No problem. Qt works on Linux, Mac and Windows, and if you find any problems, just report them to us and we are going to fix them.” Well, I was aware that I am not a paying customer of Qt Company, so the promise above was probably a bit vague (I thought), but still, it sounded quite encouraging.
So off I went, and started to learn enough Qt to implement the simple user interface I wanted. First tests on Linux seemed to work; that was nice. After a while, I started to test on Windows. And BANG, of course, there is a “hidden” problem. The most widespread (commercial) screen reader used by most blind people somehow does not see the content of text entry widgets. This was and still is a major problem for my project. I have a number of text entry fields in my UI. Actually, the main part of the UI is a simple editor, so you might see the problem already.
So some more testing was done, just to realize that yes, text entry fields indeed do not work with the most widely used screen reader on Windows. While other screen readers (NVDA) seemed to work, it is simply not feasible to ask my future users to switch to a different screen reader just for a single program. So I either needed to get JAWS fixed, or drop Qt.
Well, after a lot of testing, I ended up submitting a bug to the Qt tracker. That was a little over a year ago. The turnaround time of private mail was definitely faster.
And now I get a reply to my bug explaining that JAWS was never a priority, still is not, and that my problem will probably go away after a rewrite which has no deadline yet.
Why did I expect this already?
At least now I know why no blind user wants to have any Qt on their machine.
If you want to write cross-platform accessible software: you definitely should not use Qt. And no other Free Software toolkit for that matter, because they basically all don’t give a shit about accessibility on non-Linux platforms. Yes, GTK has a Windows port, but that isn’t accessible at all. Yes, wxWindows has a Windows port, but that has problems with, guess what, text entry fields (at least the last time I checked).
Free Software is NOT about Accessibility or equality. I have seen evidence for that claim for more than 15 years now. It is about coolness, self-staging, scratch-your-own-itchness and things like that. When Debian released Jessie, I was told that something like Accessibility is not important enough to delay the release. If GNOME just broke the whole help system by switching to not-yet-accessible WebKit, that is just bad luck, I was told. But it is outside of the abilities of package maintainers to ensure that what we ship is accessible.
I hereby officially give up. And I admit my own stupidity. Sorry for claiming Free Software would be a good thing for the world. It is definitely not for my kin. If Free Software ever takes over, the blind will be unable to use their computers.
Don’t get me wrong. I love my command-line. But as the well-known saying goes: “Free Software will be ready for the desktop user, perhaps, next year?”
The scratch-your-own-itch philosophy simply doesn’t work together with a broad list of user requirements. If you want to support users with disabilities, you probably should not rely on hippie coders right now.
I repeat: if you want to write compliant software that is also usable by people with disabilities, you cannot use Qt. For now, you will need to write a native UI for every platform you want to support. Oh, and do not believe Qt Company marketing texts; your users will suffer if you do.
My girlfriend got us tickets for the Shobaleader One performance at Porgy & Bess in Vienna. It was fantastic! 90 minutes of high-energy jazz.
As a personal memory, I captured one of my favourite Squarepusher tracks, Cooper's World. This is another case of #unseenphotography.
While I am usually not very much into jazz, I like this fusion of dnb and jazz very much.
I have only tested the most basic distributed GlusterFS setup. No replication whatsoever. We have two GlusterFS servers, storage1 and storage2. A peering between both has been established, and a very basic volume has been configured:
storage1:~# gluster
gluster> peer status
Number of Peers: 1
Hostname: storage2
Uuid: 2d22cc13-2252-4cf1-bfe9-3d27fa2fbc29
State: Peer in Cluster (Connected)
gluster> volume create data storage1:/srv/data storage2:/srv/data
...
gluster> volume start data
...
gluster> volume info
Volume Name: data
Type: Distribute
Volume ID: e2bd5767-4b33-4e57-9320-91ca76f52d56
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1:/srv/data
Brick2: storage2:/srv/data
For the test setup, I populated the volume with a number of files.
To be safe, stop the volume before you begin with the package upgrade:
gluster> volume stop data
And now perform your dist-upgrade.
After the upgrade, you will have to perform two manual cleanups. Both actions have to be performed on all storage servers.
The package maintainers have apparently neglected to take care of this one. You need to manually copy the old configuration files over:
storage1:~# cd /var/lib/glusterd && cp -r /etc/glusterd/* .
GlusterFS 3.5 requires the volume-id in an extended directory attribute. This is also not automatically handled during package upgrade.
storage1:~# vol=data
storage1:~# volid=$(grep volume-id /var/lib/glusterd/vols/$vol/info | cut -d= -f2 | sed 's/-//g')
storage1:~# setfattr -n trusted.glusterfs.volume-id -v 0x$volid /srv/data
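To verify the result, you can read the attribute back (hex-encoded) with getfattr:

storage1:~# getfattr -e hex -n trusted.glusterfs.volume-id /srv/data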
With these two steps performed on all GlusterFS servers, you should now be able to start and mount your volume again in Debian Jessie.
Do not forget to explicitly stop the volume again before continuing with the next upgrade step.
After you have dist-upgraded to Stretch, there is yet another manual step you have to take to convert the volume metadata to the new layout in GlusterFS 3.8. Make sure you have stopped your volumes and the GlusterFS server.
storage1:~# service glusterfs-server stop
Now run the following command:
storage1:~# glusterd --xlator-option *.upgrade=on -N
Now you should be ready to start your volume again:
storage1:~# service glusterfs-server start
storage1:~# gluster
gluster> volume start data
And mount it:
client:~# mount -t glusterfs storage1:/data /mnt
You should now be running GlusterFS 3.8 and your files should still all be there.
Braille displays come in various sizes. There are models tailored for desktop use (with 60 cells or more), models tailored for portable use with a laptop (usually with 40 cells), and, nowadays, there are even models tailored for on-the-go use with a smartphone or similar (with something like 14 or 18 cells).
Back in the old days, braille displays were rather massive. A 40-cell braille display was typically about the size of a 13” laptop. In modern times, manufacturers have managed to reduce the size of the internals such that a 40-cell display can be placed in front of a laptop or keyboard instead of placing the laptop on top of the braille display.
While this is a nice achievement, I personally haven’t found it to be very convenient because you now have to place two physically separate devices on your lap. It’s OK if you have a real desk, but, at least in my opinion, if you try to use your laptop as its name suggests, it’s actually inconvenient to use a small form factor, 40-cell display.
For this reason, I’ve been waiting for a long-promised new model in the Handy Tech Star series. In 2002, they released the Handy Tech Braille Star 40, which is a 40-cell braille display with enough space to put a laptop directly on top of it. To accommodate larger laptop models, they even built in a little platform at the back that can be pulled out to effectively enlarge the top surface. Handy Tech has now released a new model, the Active Star 40, that has essentially the same layout but modernized internals.
You can still pull out the little platform to increase the space that can be used to put something on top.
But, most conveniently, they’ve designed in an empty compartment, roughly the size of a modern smartphone, beneath the platform. The original idea was to actually put a smartphone inside, but this has turned out (at least to me) to not be very feasible. Fortunately, they thought about the need for electricity and added a Micro USB cable terminating within the newly created, empty compartment.
My first idea was to put a conventional Raspberry Pi inside. When I received the braille display, however, we immediately noticed that a standard-sized rpi is roughly 3mm too high to fit into the empty compartment.
Fortunately, though, a co-worker noticed that the Raspberry Pi Zero was available for order. The Raspberry Pi Zero is a lot thinner, and fits perfectly inside (actually, I think there’s enough space for two, or even three, of them). So we ordered one, along with some accessories like a 64GB SDHC card, a Bluetooth dongle, and a Micro USB adapter cable. The hardware arrived a few days later, and was immediately bootstrapped with the assistance of very helpful friends. It works like a charm!
The backside of the Handy Tech Active Star 40 features two USB host ports that can be used to connect devices such as a keyboard. A small form-factor, USB keyboard with a magnetic clip-on is included. When a USB keyboard is connected, and when the display is used via Bluetooth, the braille display firmware additionally offers the Bluetooth HID profile, and key press/release events received via the USB port are passed through to it.
I use the Bluetooth dongle for all my communication needs. Most importantly, BRLTTY is used as a console screen reader. It talks to the braille display via Bluetooth (more precisely, via an RFCOMM channel).
The keyboard connects through to Linux via the Bluetooth HID profile.
Now, all that is left is network connectivity. To keep the energy consumption as low as possible, I decided to go for Bluetooth PAN. It appears that the tethering mode of my mobile phone works (albeit with a quirk), so I can actually access the internet as long as I have cell phone reception. Additionally, I configured a Bluetooth PAN access point on my desktop machines at home and at work, so I can easily (and somewhat more reliably) get IP connectivity for the rpi when I’m near one of these machines. I plan to configure a classic Raspberry Pi as a mobile Bluetooth access point. It would essentially function as a Bluetooth to ethernet adapter, and should allow me to have network connectivity in places where I don’t want to use my phone.
It was a bit challenging to figure out how to actually configure Bluetooth PAN with BlueZ 5. I found the bt-pan python script (see below) to be the only way so far to configure PAN without a GUI.
It handles both ends of a PAN network, configuring a server and a client. Once instructed to do so (via D-Bus) in client mode, BlueZ will create a new network device - bnep0 - once a connection to a server has been established. Typically, DHCP is used to assign IP addresses for these interfaces. In server mode, BlueZ needs to know the name of a bridge device to which it can add a slave device for each incoming client connection. Configuring an address for the bridge device, as well as running a DHCP server + IP Masquerading on the bridge, is usually all you need to do.
I’m using systemd-networkd to configure the bridge device.
/etc/systemd/network/pan.netdev:
[NetDev]
Name=pan
Kind=bridge
ForwardDelaySec=0
/etc/systemd/network/pan.network:
[Match]
Name=pan
[Network]
Address=0.0.0.0/24
DHCPServer=yes
IPMasquerade=yes
Now, BlueZ needs to be told to configure a NAP profile. To my surprise, there seems to be no way to do this with stock BlueZ 5.36 utilities. Please correct me if I’m wrong.
Luckily, I found a very nice blog post, as well as an accommodating Python script that performs the required D-Bus calls.
For convenience, I use a Systemd service to invoke the script and to ensure that its dependencies are met.
/etc/systemd/system/pan.service:
[Unit]
Description=Bluetooth Personal Area Network
After=bluetooth.service systemd-networkd.service
Requires=systemd-networkd.service
PartOf=bluetooth.service
[Service]
Type=notify
ExecStart=/usr/local/sbin/pan
[Install]
WantedBy=bluetooth.target
/usr/local/sbin/pan:
#!/bin/sh
# Ugly hack to work around #787480
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
exec /usr/local/sbin/bt-pan --systemd --debug server pan
This last file wouldn’t be necessary if IPMasquerade= were supported in Debian right now (see #787480).
After the obligatory systemctl daemon-reload and systemctl restart systemd-networkd, you can start your Bluetooth Personal Area Network with systemctl start pan.
Configuring the client is also quite easy to do with Systemd.
/etc/systemd/network/pan-client.network:
[Match]
Name=bnep*
[Network]
DHCP=yes
/etc/systemd/system/pan@.service:
[Unit]
Description=Bluetooth Personal Area Network client
[Service]
Type=notify
ExecStart=/usr/local/sbin/bt-pan --debug --systemd client %I --wait
Now, after the usual configuration reloading, you should be able to connect to a specific Bluetooth access point with:
systemctl start pan@00:11:22:33:44:55
Of course, the server and client-side service configuration require a pre-existing pairing between the server and each of its clients.
On the server, start bluetoothctl and issue the following commands:
power on
agent on
default-agent
scan on
scan off
pair XX:XX:XX:XX:XX:XX
trust XX:XX:XX:XX:XX:XX
Once you’ve set scan mode to on, wait a few seconds until you see the device you’re looking for scroll by. Note its device address, and use it for the pair and (optional) trust commands.
On the client, the sequence is essentially the same except that you don’t need to issue the trust command. The server needs to trust a client in order to accept NAP profile connections from it without waiting for manual confirmation by the user.
I’m actually not sure if this is the optimal sequence of commands. It might be enough to just pair the client with the server and issue the trust command on the server, but I haven’t tried this yet.
Essentially the same as above also needs to be done in order to use the Bluetooth HID profile of the Active Star 40 on Linux. However, instead of agent on, you need to issue the command agent KeyboardOnly. This explicitly tells bluetoothctl that you’re specifically looking for a HID profile.
While I’m very happy that I actually managed to set all of this up, I must admit that the command-line interface to BlueZ feels a bit incomplete and confusing. I initially thought that agents were only for PIN code entry. Now that I’ve discovered that “agent KeyboardOnly” is used to enable the HID profile, I’m not sure anymore. I’m surprised that I needed to grab a script from a random git repository in order to be able to set up PAN. I remember, with earlier version of BlueZ, that there was a tool called pand that you could use to do all of this from the command-line. I don’t seem to see anything like that for BlueZ 5 anymore. Maybe I’m missing something obvious?
The data rate is roughly 120kB/s, which I consider acceptable for such a low power solution. The 1GHz ARM CPU actually feels sufficiently fast for a console/text-mode person like me. I’ll rarely be using much more than ssh and emacs on it anyway.
The default dimensions of the framebuffer on the Raspberry Pi Zero are a bit strange. fbset reports that the screen dimension is 656x416 pixels (with no monitor connected, of course). With a typical console font of 8x16, I got 82 columns and 26 lines.
With a 40 cell braille display, the 82 columns are very inconvenient. Additionally, as a braille user, I would like to be able to view Unicode braille characters in addition to the normal charset on the console. Fortunately, Linux supports 512 glyphs, while most console fonts do only provide 256. console-setup can load and combine two 256-glyph fonts at once. So I added the following to /etc/default/console-setup to make the text console a lot more friendly to braille users:
SCREEN_WIDTH=80
SCREEN_HEIGHT=25
FONT="Lat15-Terminus16.psf.gz brl-16x8.psf"
Note
You need console-braille installed for brl-16x8.psf to be available.
There’s a 3.5mm audio jack inside the braille display as well. Unfortunately, there are no converters from Mini-HDMI to 3.5mm audio that I know of. It would be very nice to be able to use the sound card that is already built into the Raspberry Pi Zero, but, unfortunately, this doesn’t seem possible at the moment. Alternatively, I’m looking at using a Micro USB OTG hub and an additional USB audio adapter to get sound from the Raspberry Pi Zero to the braille display’s speakers. Unfortunately, the two USB audio adapters I’ve tried so far have run hot for some unknown reason. So I have to find some other chipset to see if the problem goes away.
A little nuisance, currently, is that you need to manually power off the Raspberry, wait a few seconds, and then power down the braille display. Turning the braille display off cuts power delivery via the internal USB port. If this is accidentally done too soon then the Raspberry Pi Zero is shut down ungracefully (which is probably not the best way to do it). We’re looking into connecting a small, buffering battery to the GPIO pins of the rpi, and into notifying the rpi when external power has dropped. A graceful, software-initiated shutdown can then be performed. You can think of it as being like a mini UPS for Micro USB.
If you are a happy owner of a Handy Tech Active Star 40 and would like to do something similar, I am happy to share my current (Raspbian Stretch based) image. In fact, if there is enough interest by other blind users, we might even consider putting a kit together that makes it as easy as possible for you to get started. Let me know if this could be of interest to you.
Thanks to Dave Mielke for reviewing the text of this posting.
Thanks to Simon Kainz for making the photos for this article.
And I owe a big thank you to my co-workers at Graz University of Technology who have helped me a lot to bootstrap really quickly into the rpi world.
My first tweet about this topic is just five days ago, and apart from the soundcard not working yet, I feel like the project is already almost complete! By the way, I am editing the final version of this blog posting from my newly created monitorless ARM-based Linux laptop via an ssh connection to my home machine.
Research It uses XQuery (actually, XQilla) to do all the heavy lifting. This also means that the Research It Rulesets are theoretically also useable on other platforms. I was immediately hooked, because I always had a love for XPath. Looking at XQuery code is totally self-explanatory for me. I just like the syntax and semantics.
So I immediately checked out XQilla on Debian, and found #821329 and #821330, which were promptly fixed by Tommi Vainikainen. Thanks to him for the really quick response!
Unfortunately, making xqilla:parse-html available and upgrading to the latest upstream version is not enough to use XQilla on Linux with the typical webpages out there. Xerces-C++, which is what XQilla uses to fetch web resources, does not support HTTPS URLs at the moment. I filed #821380 to ask for HTTPS support in Xerces-C to be enabled by default.
And even with HTTPS support enabled in Xerces-C, the xqilla:parse-html function (which is based on HTML Tidy) fails for a lot of real-world webpages I tried. Manually upgrading the six year old version of HTML Tidy in Debian to the latest from GitHub (tidy-html5, #810951) did not help a lot either.
XQuery is still a very nice language for extracting information from markup documents. XQilla just has a bit of a hard time dealing with the typical HTML documents out there. After all, it was designed to deal with well-formed XML documents.
So I decided to build a little wrapper around XQilla which fetches the web resources with the Python Requests package, and cleans the HTML document with BeautifulSoup (which uses lxml to do HTML parsing). The output of BeautifulSoup can apparently be passed to XQilla as the context document. This is a fairly crazy hack, but it works quite reliably so far.
Here is what one of my web scraping rules looks like:
from click import argument, group

@group()
def xq():
    """Web scraping for command-line users."""
    pass

@xq.group('github.com')
def github():
    """Quick access to github.com."""
    pass

@github.command('code_search')
@argument('language')
@argument('query')
def github_code_search(language, query):
    """Search for source code."""
    scrape(get='https://github.com/search',
           params={'l': language, 'q': query, 'type': 'code'})
The scrape function automatically determines the XQuery filename according to the caller’s function name. Here is what github_code_search.xq looks like:
declare function local:source-lines($table as node()*) as xs:string*
{
for $tr in $table/tr return normalize-space(data($tr))
};
let $results := html//div[@id="code_search_results"]/div[@class="code-list"]
for $div in $results/div
let $repo := data($div/p/a[1])
let $file := data($div/p/a[2])
let $link := resolve-uri(data($div/p/a[2]/@href))
return (concat($repo, ": ", $file), $link, local:source-lines($div//table),
"---------------------------------------------------------------")
That is all I need to implement a custom web scraping rule. A few lines of Python to specify how and where to fetch the website from. And a XQuery file that specifies how to mangle the document content.
And thanks to the Python click package, the various entry points of my web scraping script can easily be called from the command-line.
Here is a sample invocation:
fx:~/xq% ./xq.py github.com
Usage: xq.py github.com [OPTIONS] COMMAND [ARGS]...
Quick access to github.com.
Options:
--help Show this message and exit.
Commands:
code_search Search for source code.
fx:~/xq% ./xq.py github.com code_search Pascal '"debian/rules"'
prof7bit/LazPackager: frmlazpackageroptionsdeb.pas
https://github.com/prof7bit/LazPackager/blob/cc3e35e9bae0c5a582b0b301dcbb38047fba2ad9/frmlazpackageroptionsdeb.pas
230 procedure TFDebianOptions.BtnPreviewRulesClick(Sender: TObject);
231 begin
232 ShowPreview('debian/rules', EdRules.Text);
233 end;
234
235 procedure TFDebianOptions.BtnPreviewChangelogClick(Sender: TObject);
---------------------------------------------------------------
prof7bit/LazPackager: lazpackagerdebian.pas
https://github.com/prof7bit/LazPackager/blob/cc3e35e9bae0c5a582b0b301dcbb38047fba2ad9/lazpackagerdebian.pas
205 + 'mv ../rules debian/' + LF
206 + 'chmod +x debian/rules' + LF
207 + 'mv ../changelog debian/' + LF
208 + 'mv ../copyright debian/' + LF
---------------------------------------------------------------
For the impatient, here is the implementation of `scrape`:
from bs4 import BeautifulSoup
from bs4.element import Doctype, ResultSet
from inspect import currentframe
from itertools import chain
from os import path
from os.path import abspath, dirname
from subprocess import PIPE, run
from tempfile import NamedTemporaryFile
import requests
def scrape(get=None, post=None, find_all=None,
           xquery_name=None, xquery_vars={}, **kwargs):
    """Execute a XQuery file.

    When either get or post is specified, fetch the resource and run it through
    BeautifulSoup, passing it as context to the XQuery.

    If find_all is given, wrap the result of executing find_all on
    the BeautifulSoup in an artificial HTML body.

    If xquery_name is not specified, the callers function name is used.
    xquery_name combined with extension ".xq" is searched in the directory
    where this Python script resides and executed with XQilla.

    kwargs are passed to get or post calls.  Typical extra keywords would be:
    params -- To pass extra parameters to the URL.
    data -- For HTTP POST.
    """
    response = None
    url = None
    context = None

    if get is not None:
        response = requests.get(get, **kwargs)
    elif post is not None:
        response = requests.post(post, **kwargs)

    if response is not None:
        response.raise_for_status()
        context = BeautifulSoup(response.text, 'lxml')
        dtd = next(context.descendants)
        if type(dtd) is Doctype:
            dtd.extract()
        if find_all is not None:
            context = context.find_all(find_all)
        url = response.url

    if xquery_name is None:
        xquery_name = currentframe().f_back.f_code.co_name
    cmd = ['xqilla']
    if context is not None:
        if type(context) is BeautifulSoup:
            soup = context
            context = NamedTemporaryFile(mode='w')
            print(soup, file=context)
            cmd.extend(['-i', context.name])
        elif isinstance(context, list) or isinstance(context, ResultSet):
            tags = context
            context = NamedTemporaryFile(mode='w')
            print('<html><body>', file=context)
            for item in tags: print(item, file=context)
            print('</body></html>', file=context)
            context.flush()
            cmd.extend(['-i', context.name])
    cmd.extend(chain.from_iterable(['-v', k, v] for k, v in xquery_vars.items()))
    if url is not None:
        cmd.extend(['-b', url])
    cmd.append(abspath(path.join(dirname(__file__), xquery_name + ".xq")))

    output = run(cmd, stdout=PIPE).stdout.decode('utf-8')
    if type(context) is NamedTemporaryFile: context.close()
    print(output, end='')
The full source for xq can be found on GitHub. The project is just two days old, so I have only implemented three scraping rules as of now. However, adding new rules has been made deliberately easy, so that I can just write up a few lines of code whenever I find something on the web which I’d like to scrape from the command-line. If you find this “framework” useful, make sure to share your insights with me. And if you implement your own scraping rules for a public service, consider sharing that as well.
If you have any comments or questions, send me mail. Oh, and by the way, I am now also on Twitter as @blindbird23.
So to learn something new, and to keep control over generated code, I started to investigate what it would take to write my own little custom data binding compiler.
It turns out that there are two very helpful libraries in Python which can really make your life a lot easier:
- The DTD class from lxml.etree.
- The Jinja2 templating system.
To keep my life simple, I am focusing on generating accessors for XML attributes only for now. I leave it up to the library client to figure out how to deal with child elements.
Inspired by the hybrid example from libstudxml, we define a simple base class that can store raw XML elements.
class element {
public:
  using attributes_type = std::map<xml::qname, std::string>;
  using elements_type = std::vector<std::shared_ptr<element>>;

  element(const xml::qname& name) : tag_name_(name) {}
  virtual ~element() = default;

  xml::qname const& tag_name() const { return tag_name_; }

  attributes_type const& attributes() const { return attributes_; }
  attributes_type& attributes() { return attributes_; }

  std::string const& text() const { return text_; }
  void text(std::string const& text) { text_ = text; }

  elements_type const& elements() const { return elements_; }
  elements_type& elements() { return elements_; }

  element(xml::parser&, bool start_end = true);
  void serialize(xml::serializer&, bool start_end = true) const;

  template<typename T> static std::shared_ptr<element> create(xml::parser& p) {
    return std::make_shared<T>(p, false);
  }

private:
  xml::qname tag_name_;
  attributes_type attributes_;
  std::string text_;        // Simple content only.
  elements_type elements_;  // Complex content only.
};
For each element name in the DTD, we’re going to define a class that inherits from the element class, implementing special methods to make attribute access easier. The element(xml::parser&) constructor is going to create the corresponding class whenever it sees a certain element name. This calls for some sort of factory:
class factory {
public:
  static std::shared_ptr<element> make(xml::parser& p);

protected:
  struct element_info {
    xml::content content_type;
    std::shared_ptr<element> (*construct)(xml::parser&);
  };
  using map_type = std::map<xml::qname, element_info>;

  static map_type *get_map() {
    if (!map) map = new map_type;

    return map;
  }

private:
  static map_type *map;
};

template<typename T>
struct register_element : factory {
  register_element(xml::qname const& name, xml::content const& content) {
    get_map()->insert({name, element_info{content, &element::create<T>}});
  }
};

std::shared_ptr<element> factory::make(xml::parser& p) {
  auto name = p.qname();
  auto iter = get_map()->find(name);
  if (iter == get_map()->end()) {
    // No subclass found, so store plain data so we do not lose it on roundtrip.
    return std::make_shared<element>(p, false);
  }
  auto const& element = iter->second;
  p.content(element.content_type);

  return element.construct(p);
}
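To make the mechanics concrete, here is roughly what the generated code for a hypothetical element note with complex content would boil down to (illustrative only; the real classes are generated from the DTD, as shown below):

// Hypothetical generated subclass for a <note> element:
class note : public element {
  static register_element<note> factory_registration;
public:
  note(xml::parser& p, bool start_end = true) : element(p, start_end) {}
};

// The static member definition wires the element name and its content type
// into the factory map before main() runs:
register_element<note> note::factory_registration{xml::qname{"note"}, xml::content::complex};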
Now that we have our required infrastructure, we can finally start writing Jinja2 templates to generate classes for all elements in our DTD:
{%- for elem in dtd.iterelements() %}
{%- if elem.name in forwards_for %}
{%- for forward in forwards_for[elem.name] %}
class {{forward}};
{%- endfor %}
{%- endif %}
class {{elem.name}} : public dom::element {
static register_element<{{elem.name}}> factory_registration;
public:
{{elem.name}}(xml::parser& p, bool start_end = true) : dom::element(p, start_end) {
}
{%- for attr in elem.iterattributes() %}
{%- if attr is required_string_attribute %}
std::string {{attr.name}}() const;
void {{attr.name}}(std::string const&);
{%- elif attr is implied_string_attribute %}
optional<std::string> {{attr.name}}() const;
void {{attr.name}}(optional<std::string>);
{# more branches to go here #}
{%- endif %}
{%- endfor %}
};
{%- endfor %}
required_string_attribute and implied_string_attribute are so-called Jinja2 tests. They are a nice way to isolate predicates such that the Jinja2 templates can stay relatively free of complicated expressions:

templates.tests['required_string_attribute'] = lambda a: \
    a.type in ['id', 'cdata', 'idref'] and a.default == 'required'
templates.tests['implied_string_attribute'] = lambda a: \
    a.type in ['id', 'cdata', 'idref'] and a.default == 'implied'
That is nice, but we have only seen C++ header declarations so far. Let’s have a look at the implementation of some of our attribute accessors.
One interesting aspect of DTD based code generation is the fact that attributes can have enumerations specified. Assume that we have some extra data-structure in Python which helps us to define a nice name for each individual enumeration attribute. Then, a part of the Jinja2 template to generate the implementation for an enumeration attribute looks like:
{%- elif attr is known_enumeration_attribute %}
{%- set enum = enumerations[tuple(attr.values())]['name'] %}
{%- if attr.default == 'required' %}
{{enum}} {{elem.name}}::{{attr.name}}() const {
auto iter = attributes().find(qname{"{{attr.name}}"});
if (iter != attributes().end()) {
{%- for value in attr.values() %}
{% if not loop.first %}else {% else %} {% endif -%}
if (iter->second == "{{value}}") return {{enum}}::{{value | mangle}};
{%- endfor %}
throw illegal_enumeration{};
}
throw missing_attribute{};
}
void {{elem.name}}::{{attr.name}}({{enum}} value) {
static qname const attr{"{{attr.name}}"};
switch (value) {
{%- for value in attr.values() %}
case {{enum}}::{{value | mangle}}:
attributes()[attr] = "{{value}}";
break;
{%- endfor %}
default:
throw illegal_enumeration{};
}
}
{%- elif attr.default == 'implied' %}
{# similar implementation using boost::optional #}
{%- endif %}
{%- endif %}
The header for the library is generated like this:
from jinja2 import DictLoader, Environment
from lxml.etree import DTD

LIBRARY_HEADER = """
{# Our template code #}
"""

bmml = DTD('bmml.dtd')
templates = Environment(loader=DictLoader(globals()))

templates.filters['mangle'] = lambda ident: \
    {'8th_or_128th': 'eighth_or_128th',
     '256th': 'twohundredfiftysixth',
     'continue': 'continue_'
    }.get(ident, ident)

def template(name):
    return templates.get_template(name)

def hpp():
    print(template('LIBRARY_HEADER').render(
        {'dtd': bmml,
         'enumerations': enumerations,
         'forwards_for': {'ornament': ['ornament_type'],
                          'score': ['score_data', 'score_header']}
        }))
With all of this in place, we can have a look at a small use case for our library.
I haven’t really explained anything about the document format we’re working with until now. Braille Music Markup Language is an XML-based plain text markup language. Its purpose is to be able to enhance plain braille music scores with usually hard-to-calculate meta information. Almost all element text content is supposed to be printed as-is to reconstruct the original plain text.
So we could at least define one very basic operation in our library: printing the plain text content of an element.
I found an XML stylesheet that is supposed to convert BMML documents to HTML.
This stylesheet apparently has a bug, insofar as it forgets to treat the rest_data element in the same way as it already treats the note_data element.
Note to self: I wish I had done a code review before the EU project that developed BMML was finished. It looks like resurrecting maintenance is one of the things I might be able to look into at a meeting in Pisa in the first three days of March this year.
If we keep this in mind, we can easily reimplement what the stylesheet does in idiomatic C++:
template<typename T>
typename std::enable_if<std::is_base_of<element, T>::value, std::ostream&>::type
operator<<(std::ostream &out, std::shared_ptr<T> elem) {
if (!std::dynamic_pointer_cast<note_data>(elem) &&
!std::dynamic_pointer_cast<rest_data>(elem) &&
!std::dynamic_pointer_cast<score_header>(elem))
{
auto const& text = elem->text();
if (text.empty()) for (auto child : *elem) out << child; else out << text;
}
return out;
}
The use of std::enable_if is necessary here so that operator<< is defined for the element class and all of its subclasses. Without the std::enable_if magic, client code would be forced to manually make sure it is passing std::shared_ptr<element> each time it wants to use operator<< on any of our specially defined subclasses.
Now we can easily print BMML documents and get their actual plain text representation.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <xml/parser>
#include <xml/serializer>

#include "bmml.hxx"

using namespace std;
using namespace xml;

int main (int argc, char *argv[]) {
  if (argc < 2) {
    cerr << "usage: " << argv[0] << " [<filename.bmml>...]" << endl;
    return EXIT_FAILURE;
  }
  try {
    for (int i = 1; i < argc; ++i) {
      ifstream ifs{argv[i]};

      if (ifs.good()) {
        parser p{ifs, argv[i]};

        p.next_expect(parser::start_element, "score", content::complex);
        cout << make_shared<bmml::score>(p, false) << endl;
        p.next_expect(parser::end_element, "score");
      } else {
        cerr << "Unable to open '" << argv[i] << "'." << endl;
        return EXIT_FAILURE;
      }
    }
  } catch (xml::exception const& e) {
    cerr << e.what() << endl;
    return EXIT_FAILURE;
  }
}
That’s it for now. The full source for the actual library which inspired this posting can be found on github in my bmmlcxx project.
If you have any comments or questions, send me mail. If you like bmmlcxx, don’t forget to star it :-).
Not everything I’ve had to implement so far was actually pretty. I spent yesterday evening implementing accidentals handling, which turned out to be quite a mess. However, I wanted to share my definition of the circle of fifths, because I find it rather concise.
Given a key signature (often expressed as the number of sharp or flat accidentals), tell which pitch classes are actually raised/lowered.
While reading through music notation software, I have seen several implementations of this basic concept. However, I have never seen one which was so concise.
module Accidental where
import Data.Map (Map)
import qualified Data.Map as Map (fromList)
import qualified Haskore.Basic.Pitch as Pitch
fifths n | n > 0 = let [a,b,c,d,e,f,g] = fifths (n-1) in [d,e,f,g+1,a,b,c]
         | n < 0 = let [a,b,c,d,e,f,g] = fifths (n+1) in [e,f,g,a,b,c,d-1]
         | otherwise = replicate 7 0
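For example, in GHCi (list positions correspond to the diatonic steps [C,D,E,F,G,A,B]):

λ> fifths 2     -- D major: F sharp and C sharp
[1,0,0,1,0,0,0]
λ> fifths (-1)  -- F major: B flat
[0,0,0,0,0,0,-1]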
Given this, we can easily define a Map of pitches to currently active accidentals/alterations. List comprehension to the rescue!
accidentals :: Int -> Map Pitch.T Pitch.Relative
accidentals k = Map.fromList [ ((o, c), a)
                             | o <- [0..maxOctave]
                             , (c, a) <- zip diatonicSteps $ fifths k
                             , a /= 0
                             ] where
  maxOctave = 9
  diatonicSteps = [Pitch.C, Pitch.D, Pitch.E, Pitch.F, Pitch.G,
                   Pitch.A, Pitch.B]
The full source code for the haskore-braille (WIP) package can be found on GitHub.
If you have any comments regarding the implementation, please drop me a mail.
If you don’t want to watch the video, here is the excerpt I am talking about:
One of the things that Cambridge could do, and later Bell Labs could do, is somehow raise people’s expectations of themselves. Raise the level that is considered acceptable. You walk in and you see what people are doing, you see how people are doing, you see how apparently easily they do it, and you see how nice they are while doing it, and you realize, I better sharpen up my game. This is something where you have to, you just have to get better. Because, what is acceptable has changed. And some organisations can do that, and well, most can’t, to that extent. And I am very very lucky to be in a couple places that actually can increase your level of ambition, in some sense, level of what is a good standard.
#include <QApplication>
#include <QTextEdit>

int main(int argv, char **args)
{
    QApplication app(argv, args);

    QTextEdit textEdit;
    textEdit.setText(u8"\u28FF");
    textEdit.show();

    return app.exec();
}
(compile with -std=c++11).
On my system, this “application” does not always show the correct glyph. Sometimes, it renders a white square with a black border, i.e., the symbol for an unknown glyph. However, if I invoke the same executable several times, it sometimes renders the glyph correctly.
In other words: the glyph choosing mechanism is apparently non-deterministic!!!
UPDATE: Sune Vuorela figured out that I need to set QT_HARFBUZZ=old in the environment for this bug to go away. Apparently, harfbuzz-ng from Qt 5.3 is buggy.
I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment, to make the client API really nice and tidy.
During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand.
If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: Rather type-safe, with a quite self-explanatory API.
For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt.
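Hypothetically (the repository URL and the target names here are placeholders), the whole integration boils down to:

git submodule add <repository-url> xsdcxx-musicxml

plus two lines in CMakeLists.txt:

add_subdirectory(xsdcxx-musicxml)
target_link_libraries(myapp xsdcxx-musicxml)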
Finally, if you want to see how this library can be put to use: The MusicXML export functionality of BMC is all in one C++ source file: musicxml.cpp.
We want to spread work amongst all available CPU cores. There are no dependencies between items in our work queue. So every thread can just pick up and process an item as soon as it is ready.
This simple implementation makes use of C++11 threading primitives, lambda functions and move semantics. The idea is simple: you provide a function at construction time which defines how to process one item of work. To pass work to the queue, simply call the function operator of the object, repeatedly. When the destructor is called (once the object reaches the end of its scope), all remaining items are processed and all background threads are joined.
The number of threads defaults to the value of std::thread::hardware_concurrency(). This appears to work at least since GCC 4.9. Earlier tests have shown that std::thread::hardware_concurrency() always returned 1. I don’t know when exactly GCC (or libstdc++, actually) started to support this, but at least since GCC 4.9, it is usable. Prerequisite on Linux is a mounted /proc.
The maximum number of items per thread in the queue defaults to 1. If the queue is full, calls to the function operator will block.
So the most basic usage example is probably something like:
int main() {
  typedef std::string item_type;

  distributor<item_type> process([](item_type &item) {
    // do work
  });

  while (/* input */) process(std::move(/* item */));

  return 0;
}
That is about as simple as it can get, IMHO.
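Here is a slightly more concrete sketch (hypothetical, just to show the moving parts; it assumes the distributor template below is in scope):

#include <functional>
#include <iostream>
#include <string>

// Hash every line read from stdin, distributing the work across all cores.
int main() {
  distributor<std::string> process([](std::string &line) {
    std::hash<std::string>{}(line);  // stand-in for real per-item work
  });

  for (std::string line; std::getline(std::cin, line); )
    process(std::move(line));

  return 0;  // the destructor drains the queue and joins all threads
}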
The code can be found in the GitHub project mentioned above. However, since the class template is relatively short, here it is.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>

template <typename Type, typename Queue = std::queue<Type>>
class distributor: Queue, std::mutex, std::condition_variable {
  typename Queue::size_type capacity;
  bool done = false;
  std::vector<std::thread> threads;

public:
  template<typename Function>
  distributor( Function function
             , unsigned int concurrency = std::thread::hardware_concurrency()
             , typename Queue::size_type max_items_per_thread = 1
             )
  : capacity{concurrency * max_items_per_thread}
  {
    if (not concurrency)
      throw std::invalid_argument("Concurrency must be non-zero");
    if (not max_items_per_thread)
      throw std::invalid_argument("Max items per thread must be non-zero");

    for (unsigned int count {0}; count < concurrency; count += 1)
      threads.emplace_back(static_cast<void (distributor::*)(Function)>
                           (&distributor::consume), this, function);
  }

  distributor(distributor &&) = default;
  distributor &operator=(distributor &&) = delete;

  ~distributor()
  {
    {
      std::lock_guard<std::mutex> guard(*this);
      done = true;
      notify_all();
    }
    for (auto &&thread: threads) thread.join();
  }

  void operator()(Type &&value)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (Queue::size() == capacity) wait(lock);
    Queue::push(std::forward<Type>(value));
    notify_one();
  }

private:
  template <typename Function>
  void consume(Function process)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (true) {
      if (not Queue::empty()) {
        Type item { std::move(Queue::front()) };
        Queue::pop();
        notify_one();
        lock.unlock();
        process(item);
        lock.lock();
      } else if (done) {
        break;
      } else {
        wait(lock);
      }
    }
  }
};
If you have any comments regarding the implementation, please drop me a mail.
Now, exercism has recently gained a C++ track. That track is particularly fun, because it is based on C++11, Boost, and CMake: things that are quite standard to C++ development these days. And the use of C++11 and Boost makes some solutions really shine.
That means you can use “M-x list-packages RET” to install them in GNU Emacs 24.
In 2007, I wrote OSC server and client support for Emacs. I used it back then to communicate with SuperCollider and related software.
osc.el is a rather simple package with no user-visible functionality, as it only provides a library for Emacs Lisp programmers.
It is probably most interesting to people wanting to remote-control (modern) sound related software from within Emacs Lisp.
As my interest in poker has recently been sparked again, one thing led to another, and I began to write a poker library for GNU Emacs. It was a very fun experience.
Version 0.1 of poker.el can simulate a table of ten players. Bots do make their own decisions, although the bot code is very simple. The complete game is currently played in the minibuffer. So there is definitely room for user interface enhancements, such as a poker table mode for displaying a table in a buffer.
I started to write metar.el in 2007 as well, but never really finished it to a releasable state. I use it personally rather often, but never cleaned it up for a release. This has changed.
It plugs into other GNU Emacs features that make use of your current location. In particular, “M-x sunrise-sunset” and “M-x phases-of-moon” use the same variables (calendar-latitude and calendar-longitude) to determine where you are. “M-x metar” will determine the nearest airport weather station and display the weather information provided by that station.
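A minimal sketch of that wiring; the coordinates are example values only, substitute your own:

(setq calendar-latitude 48.2     ; example values only
      calendar-longitude 16.4)
;; M-x metar now picks the weather station nearest to these coordinates.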
Finally, after many, many years of development interrupted by uncountable amounts of hiatus, chess.el is now out as version 2.0.3!
It doesn’t happen very often, but app programmers in the iOS universe do indeed sometimes think about accessibility support, and the APIs provided by Apple are useful enough to allow programmers to write very accessible apps. Say about Apple whatever you want; currently, it is the company providing the best accessibility support on the market. Why? Because they made accessibility a first-class citizen of their platform(s). This is where policy helps. If you can dictate top-down that you support people with disabilities, things actually start to happen. If you have to ask, hope, and wait, as it is with free software, things do not progress as fast as users need them to.
Back to THETA Poker Pro: the default configuration is already very usable with VoiceOver. However, if you want the cards placed on the board announced to you, so that you do not have to discover them manually by touch, you can enable the “Card Announcement” item in the Options menu. You can also make the message delay a bit slower, so that all messages are actually fully spoken and not occasionally cut off. With these two settings adjusted, and maybe “Animation” set to “Very fast”, the game feels extremely nice. There is actually nothing I would want to change, which does not happen very often when I test a program for its accessibility.
With these settings changed, game play is very smooth with VoiceOver: you basically just have to tap your cards to check, tap the deck to fold, or tap your chips to raise. Very simple, and these three “buttons” are at the bottom of the screen, so they are rather easy and quick to find. All other activity is automatically announced by VoiceOver.
I have played a few hundred hands already with this app. It is a wonderful way to pass time. For instance, I don’t like going to my doctor, because I usually wait up to two or three hours. I had to pay her a visit on Monday. While waiting, I played “a few” hands, and suddenly I was already called in. When I came out again, I checked the time and was rather surprised: yes, I had waited two hours again, but this time I didn’t notice! :-)
Special thanks go to the author(s) of this app. It is a good example of an app that was not specially made for the blind, but which feels like it was. Thanks, you’ve made my week!
OTOH, it makes me sad when I think about my beloved Linux platform and GUI accessibility. We have been stuck since 2004 with a bit of desktop support plus a half-working Firefox. During the D-Bus rewrite, the quality of GUI accessibility dropped so much that I had to take time off from Linux GUI accessibility to stay sane. It is back to where it was in 2006, yay, but we haven’t made a lot of real progress in the last 8 years. Granted, Firefox has improved, but to my taste, not enough. I still do all my email, shell work, programming and some other things on Linux of course, but I notice that I do more and more casual stuff on iOS; it is just sooo much more usable. I do almost all my surfing with mobile Safari, because it just works. Firefox works sometimes, and at other times working with it feels so slow that I actually get angry.
The scratch-your-own-itch philosophy combined with a very small, marginalised user group is poison for success. We’d need much more funding, and people actively working on this stuff as their day job, if we ever want to be competitive with existing solutions.
All this has been made possible by gnuplot’s ability to generate text plots, and Manoj’s willingness to implement it during his term as project secretary. Thanks to Gnuplot and Manoj, and thanks to the current secretary for keeping this feature; it is (at least to me) a very nice thing to have, and it actually makes Debian rather unique.
I personally don’t know of any other major projects which provide text graphs. We are indeed setting a very good example here. It would be nice if other projects would adopt this as well. This is bridging the digital divide for me.
For completeness’ sake I should probably mention that text graphs are not a universal solution for blind users. Those of us who do not use braille will probably have a very hard time extracting any meaningful information from this ASCII character salad. But to a braille user used to reading two-dimensional information from the screen, it actually works rather well. Some solutions from the 90s, when people didn’t have graphical terminals readily available everywhere, are still very good accessibility workarounds.
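For the curious: the gnuplot feature behind such text plots is its dumb terminal. A minimal sketch, with a hypothetical data file:

set terminal dumb             # render the plot as plain text
plot "votes.dat" with lines   # hypothetical data file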
For many years, I used he.net for its DNS management interface. It was dead simple, and therefore accessible. All they had was a basic textarea with BIND-like configuration in it. I could log in to their admin interface and change my DNS records as desired.
A few years ago, they auto-upgraded my account to their new shiny DNS panel, which, surprise surprise, is no longer accessible with a simple text browser. After a bit of bitching with support, they ended up downgrading my account back to the old functionality, so I was happy again. However, as you might guess, the last time I needed to change a DNS record, I found that the DNS panel had been upgraded yet again and is once more not accessible to me.
So it was time to leave the sinking ship. But I needed to find an accessible DNS hosting service. Not an easy task, given that everyone seems to do more or less the same thing these days.
After a bit of web searching it became apparent that most offerings these days are not what I want. I want a simple interface without any danger of accessibility issues. In most cases, you cannot test the DNS management interface before signing up. After a few dead ends, I took a step back and said to myself: “So, what is it that I am actually looking for? If this were a wishlist item, how would I like my workflow to be?” And the answer came immediately: “I want my zone files in a git repo!”
So I decided to turn my search upside down and search exactly for that. And guess what, I found exactly what I was looking for: LuaDNS.
LuaDNS has 5 nameservers in Europe, Asia and North America. As the name implies, it offers a way to write your zone files in Lua. This can be quite helpful for programmatically generating zones. However, it also supports BIND-like zone files, which is what I use.
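To illustrate, a zone file in such a repository might look roughly like this; all names, addresses and the serial are placeholders:

$TTL 3600
@    IN SOA ns1.luadns.net. hostmaster.example.org. (
         2014010101 ; serial (placeholder)
         3600       ; refresh
         900        ; retry
         604800     ; expire
         3600 )     ; minimum
     IN NS ns1.luadns.net.
     IN A  192.0.2.1          ; placeholder address
www  IN A  192.0.2.1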
The idea is simple: you create a Git repository on GitHub or BitBucket and let LuaDNS know where it is. A web hook can be set up to automatically trigger zone rebuilds once you push to your repository.
So all my accessibility problems around DNS hosting are suddenly completely gone. Once I edit and commit my zone files and push to my repository, LuaDNS automatically pulls from the repository and updates my zones.
And I will never have to fight with an inaccessible web interface again. That said, LuaDNS has a web interface for administering account settings. It works very nicely with Lynx. I hope they keep it that way.
There are two things I don’t particularly like about LuaDNS currently:
The team is friendly and was very quick to respond to a question via email. Looks good, I’ll stay.
Now that I think of it, this article might be considered an answer to Steve Kemp’s question about what you would pay for: I’d pay for a VCS-based DNS hosting solution that allows me to use DNSSEC, if its web interface were kept clean and simple and therefore accessible. However, I don’t mind a free account for low-volume usage at all. Especially if that makes it easy to test the service and make sure it works as expected.
Unfortunately, I neglected to collect notes about how I did it manually. However, Linux 3.2 is getting a bit old, so I finally wanted to replace my manual boot configuration with something handled by the package system.
In case you don’t have /boot/efi in /etc/fstab yet, you need to mount /dev/sda1 on /boot/efi for the following to work.
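Such an entry might look like this; the EFI system partition is conventionally FAT-formatted, and the device name here is specific to my machine:

# device name is machine-specific
/dev/sda1  /boot/efi  vfat  defaults  0  2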
The documentation I found on the net suggested to just reinstall grub-efi-amd64 and everything should work. That is not quite true. When I do
# apt-get install --reinstall grub-efi-amd64
nothing changes in /boot/efi. I sort of expected that /boot/efi/EFI/debian would be created, and the EFI image placed in there. However, that did not happen. Why is that?
It turns out that when I installed grub-efi-amd64 manually in 2012, I created /boot/efi/EFI/boot/bootx64.efi, which is the EFI fallback location, and apparently exactly what I want on this MacBook, which does not support multiple boot options.
Matthew Garrett posted an interesting article called Booting with EFI which sheds light on this issue; go and read it.
Looking at /var/lib/dpkg/info/grub-efi-amd64.postinst revealed that /boot/efi/EFI/debian needs to be created manually first. If this directory does not exist, grub-efi-amd64 basically does nothing on reinstall.
Running grub-install will actually create a new EFI image. However, it is created in the wrong place for this machine.
# grub-install --target=x86_64-efi
does the trick. Now /boot/efi/EFI/debian/grubx64.efi gets created. However, since I don’t want to make GRUB the default, there is yet another manual step to do:
# cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/boot/bootx64.efi
Now I can select EFI Boot after pressing the Option key during startup. GRUB is loaded and Linux 3.13 gets booted. Strike!
Looking more closely reveals that there is actually a way to tell grub-install that it should install to the fallback location directly: the --removable option does that.
For the faint of heart, what does grub-install actually do on an EFI system? It does not write directly to the disk, therefore it does not need a device specified. It looks for files in /boot/efi and assumes the EFI partition is mounted there.
So for my use case, the correct way to upgrade to a current GRUB EFI image should have been:
# grub-install --removable --target=x86_64-efi
Meanwhile I’ve been made aware of Bug#708430.
I guess it would be nice to have an option in /etc/default/grub to indicate that installation to the fallback location is desired. While this is a rather ugly hack to work around a stupid limitation, it is still what I’d like on this MacBook, at least since I don’t have a triple-boot situation. The fallback location works fine with just two OSes coexisting.
This works fine as long as I have a good SSH terminal on a desktop or laptop computer. However, it does not work very well on tablets or smaller mobile devices.
Additionally, mail splitting has become a performance burden over the years. I do not want to wait for Gnus to sort incoming mail into different folders while checking for new mail. That is something which should already have been done in the background, and that’s what we are going to cover with the setup described below.
So I had to change my simple setup to accommodate the new trends in mobile computing.
The obvious core of such a setup is an IMAP server which receives and stores your mail such that different clients can access it. So the days of my Gnus nnml storage are definitely over. Mail is no longer stored in and by my mail client.
While there are several IMAP server solutions out there, I find Dovecot fits my needs quite nicely.
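On Debian, the components used below come from the usual packages; a sketch of the install step, package names to the best of my knowledge:

# apt-get install dovecot-imapd dovecot-sieve fetchmail spamassassin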
I’ve decided to store my mail in Maildir format in ~/Maildir. I prefer storing data like that in the home directory to avoid having to back up separate files from /var/. Maildir also features some index files which should help performance in the long run.
Incoming mail will be delivered solely by fetchmail and should be checked for spam. While I could probably configure Exim to run SpamAssassin on mails before delivering them to Dovecot, there is a much more elegant solution: the Dovecot local delivery agent (LDA). /usr/lib/dovecot/deliver takes mail from standard input, performs Sieve filtering and updates the mail indexes. We will call this executable more or less directly from fetchmail.
Incoming mail from mailing lists will be sorted into different folders using Sieve. Dovecot needs to be told to enable the sieve plugin and to create new folders on demand.
/etc/dovecot/local.conf:
disable_plaintext_auth = yes
mail_location = maildir:~/Maildir
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
protocol lda {
mail_plugins = sieve
}
Sieve scripts are actually quite intuitive once you have a template to start from.
~/.dovecot.sieve:
require "fileinto";
if exists "X-Spam-Flag" {
# Store spam tagged by SpamAssassin into dedicated Spam folder
if header :contains "X-Spam-Flag" "YES" {
fileinto "Spam";
}
} elsif exists "X-Cron-Env" {
# Store mails from Cron daemon in dedicated folder
fileinto "cron";
} elsif exists "List-Id" {
# File list-mail into dedicated folders, matching on List-Id
if header :contains "List-Id" "boost-users.lists.boost.org" {
fileinto "boost-users";
} elsif header :contains "List-Id" "brltty.mielke.cc" {
fileinto "brltty";
} elsif header :contains "List-Id" "debian-accessibility.lists.debian.org" {
fileinto "debian-accessibility";
} elsif header :contains "List-Id" "debian-devel-announce.lists.debian.org" {
fileinto "debian-devel-announce";
} elsif header :contains "List-Id" "debian-devel.lists.debian.org" {
fileinto "debian-devel";
} elsif header :contains "List-Id" "spirit-general.lists.sourceforge.net" {
fileinto "spirit-general";
}
# ...
}
Since I want automatic classification of spam messages, I use SpamAssassin. Just install spamassassin and enable spamd in /etc/default/spamassassin:
ENABLE=1
We will use spamc in the Fetchmail configuration.
My ~/.fetchmailrc is a straightforward list of some mailboxes to fetch mail from. I use the mda directive to skip the MTA, send mail through SpamAssassin, and deliver it to Dovecot via its LDA mechanism.
~/.fetchmailrc:
set daemon 1200 # Poll at 20 minute intervals
poll blind.guru protocol IMAP: ssl;
# ... add more sources here ...
mda "/usr/bin/spamc -u %T -e /usr/lib/dovecot/deliver -d %T"
To avoid spreading access information across too many configuration files, I am using Fetchmail’s ability to use netrc to retrieve account passwords.
~/.netrc:
machine blind.guru login mlang password <hidden>
I have been using Gnus to read mail, newsgroups and RSS feeds for many years now. It would be quite a mouthful to explain all the customizations I am using by now. But there is one very important bit in the context of this article: how to access the IMAP server? In my setup, Emacs, and therefore Gnus, is running on the same machine as the IMAP server. So I can avoid authentication altogether. The following configuration avoids unnecessary password prompts or caching.
In ~/.emacs or ~/.gnus:
(setq gnus-secondary-select-methods '((nnimap "localhost"
(nnimap-stream shell)))
nnimap-shell-program "/usr/lib/dovecot/imap")
With this you should be able to subscribe to your IMAP folders from within Gnus with ease.
Sorting incoming mails into folders is now performed by the IMAP server through Sieve scripts. Instead of changing Gnus’ configuration, I now edit ~/.dovecot.sieve when I subscribe to a new mailing list. If you add a new Sieve rule for a mailing list and the associated folder does not exist yet, Dovecot will autocreate it. Very convenient.
Now all that is left is a way for your mobile devices to read and eventually send mail. This is very much dependent on your network setup, so I am not going to go into any detail here. If you are accessing your mail setup from a tablet in your local network, you might get away without tinkering with your router configuration. If you want to read and send mail on the go, you need some way to get to your external IP: either it is stable enough, or you need some dynamic DNS service. You will definitely want to forward the IMAP and maybe SMTP ports from your router to your home server. If you don’t have an existing SMTP server for your mobile device that accepts your outgoing mails, you can also set one up yourself and deliver outgoing mail from your mobile device to the world with Exim or qmail.
I am personally using Exim, since it is the default MTA for Debian.
Configuring Exim to take mail from iOS devices was as simple as enabling an appropriate authentication method and adding an account to /etc/exim4/passwd. I have to admit though that I don’t particularly like Exim’s configuration files. That is why I ended up using Dovecot’s LDA in the first place.
Yes, apparently there is a BrainFuck-like two-dimensional esoteric programming language called MarioLANG.
And GitHub even has an implementation written in Ruby.
I should really allocate a bit of spare time to write at least something in it myself.
Boost.Python offers a very nice and flexible way to interface C++ data types with Python. With just a few lines of code, and the proper linker flags, you get a Python importable shared object from your C++ compiler. This can be very productive.
However, there is one aspect of C++ data types that I couldn’t figure out how to interface with Python: discriminated unions, or more specifically, heterogeneous containers. While Python has no problems with containers containing objects of different types, C++ does not make this very easy by default. Usually the problem is solved with a container of pointers to a base class, and various subclasses with virtual functions. However, this approach is not always practical, especially if the different types of objects in a heterogeneous container don’t have many things in common. This is where discriminated unions come to the rescue. They basically behave like a normal union in C, but have an additional field which indicates the type of object currently stored in the union. Boost.Variant does exactly that, with a nice visitor interface added on top of it.
If we put the boost::variant<> template inside an STL container like std::vector<>, the result is a heterogeneous container. For the purpose of illustration, let’s implement such a container. The example below is deliberately simple. In reality, the various types allowed in your variant will probably have more fields than just one.
#include <boost/variant.hpp>
#include <string>
#include <vector>

struct a { int x; };
struct b { std::string y; };

typedef boost::variant<a, b> variant;
typedef std::vector<variant> vector;
To ease creation of these two types of objects, we are going to write a few factory functions. We are going to wrap them in Python later on.
variant make_variant() { return variant(); }
vector make_vector() { return vector{a(), b(), a()}; }
Now let’s create a Python module which exports the above functionality to Python.
#include <boost/python/class.hpp>
#include <boost/python/def.hpp>
#include <boost/python/implicit.hpp>
#include <boost/python/init.hpp>
#include <boost/python/module.hpp>
#include <boost/python/object.hpp>
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
vector_indexing_suite apparently needs operator== defined on the value_type of the container. In our case, this is our boost::variant<a, b> type. Luckily, boost::variant<> already provides operator==. However, that operator== relies on operator== being defined for the underlying types. Since equality comparison is probably useful for other things as well, let’s just create operator== for our two classes a and b.
bool operator==(a const &lhs, a const &rhs) { return lhs.x == rhs.x; }
bool operator==(b const &lhs, b const &rhs) { return lhs.y == rhs.y; }
Boost.Python needs a way to convert our discriminated union to a Python object. This code relies on Python class definitions being present for all underlying variant types. We will define them later.
struct variant_to_object : boost::static_visitor<PyObject *> {
static result_type convert(variant const &v) {
return apply_visitor(variant_to_object(), v);
}
template<typename T>
result_type operator()(T const &t) const {
return boost::python::incref(boost::python::object(t).ptr());
}
};
And finally, let’s create our Python module.
BOOST_PYTHON_MODULE(bpv) {
  using namespace boost::python;

  class_<a>("a", init<a>()).def(init<>()).def_readwrite("x", &a::x);
  class_<b>("b", init<b>()).def(init<>()).def_readwrite("y", &b::y);

  to_python_converter<variant, variant_to_object>();
  implicitly_convertible<a, variant>();
  implicitly_convertible<b, variant>();

  def("make_variant", make_variant);

  class_<vector>("vector").def(vector_indexing_suite<vector, true>());

  def("make_vector", make_vector);
}
Let’s create a shared object for Python.
$ g++ -std=c++11 -fPIC -shared $(python-config --includes) -o bpv.so file.cpp -lboost_python
We can load the module into Python and see what it does.
>>> import bpv
>>> variant=bpv.make_variant()
>>> variant
<bpv.a object at 0x7f06bb2130c0>
>>> variant.x
0
>>> variant.x=2
>>> variant.x
2
Nice. We can access the underlying type, and even modify it.
Let’s see how our heterogeneous container wrapping code behaves.
>>> vector=bpv.make_vector()
>>> vector
<bpv.vector object at 0x7f20693289d0>
>>> len(vector)
3
>>> list(vector)
[<bpv.a object at 0x7f20693190c0>, <bpv.b object at 0x7f20693193d0>, <bpv.a object at 0x7f2069319440>]
So far, so good. This will at least make it possible to convert heterogeneous containers from C++ to Python, which was my initial goal.
Unfortunately, contained objects are not treated as references. Whenever retrieved, we get a copy. So in-place modification does not work.
>>> vector[0].x
0
>>> vector[0].x=2
>>> vector[0].x
0
However, we can overwrite an existing element with a modified copy.
>>> e0=vector[0]
>>> type(e0)
<class 'bpv.a'>
>>> e0.x = 2
>>> vector[0] = e0
>>> vector[0].x
2
And we can also use the append and extend methods of Python containers.
>>> len(vector)
3
>>> vector.extend(vector)
>>> vector.append(bpv.a())
>>> len(vector)
7
>>> len(filter(lambda x: type(x)==bpv.b, vector))
2
>>> len(filter(lambda x: type(x)==bpv.a, vector))
5
>>> map(lambda x: x.x, filter(lambda x: type(x)==bpv.a, vector))
[2, 0, 2, 0, 0]
All that is missing for a perfect world is reference semantics for container elements. If anyone has a hint on how to achieve this, please let me know.
This has definitely been fun! I've recorded a small piano piece from the movie Amélie (2001). It is on YouTube, to make it easy for various platforms to play it.
Wing Chun promotes two principles which do indeed read and sound like exactly what a blind person wants to practice: close range and uncommitted techniques. I can't really tell you more just now, except what you can already read on the Internet. Just one more very relevant pointer: Chi Sao (sticking hands). The idea is to develop reflexes especially for close range combat, and the principle is to always stay in contact with your opponent, something that very much resonates with me (obviously, keeping in touch with your opponent is exactly what you need if you have no sight at all).
I hate CAPTCHAs, for the obvious reasons. At first, it meant that I started to be excluded from all sorts of services on the net, basically everything that requires me to register an account and thinks of itself as being leet or something. I find it particularly funny (in the Chinese sense) that CAPTCHAs started to emerge after the W3C's Web Accessibility Initiative finally made some progress in educating web designers. So while the internet is now officially accessible (at least it is easy to claim this today), they have now found a much better way to exclude us blind people categorically. They just pretend we are not humans anymore (that's actually nothing new in perceived real life, but it feels new to me in information technology).
Now, of course, you will say, these days there are audio CAPTCHAs. However, this is what I tried to use on twitter.com. They tell me they are looking for two words I am supposed to enter, and that I should not worry, a best guess is OK. As mentioned above, I tried this seven times. With some attempts, I didn't understand a single word at all; with other attempts, I understood way more than two words, and it was never clearly marked which of the excess words were supposed to be ignored. No matter what I entered, I apparently failed to solve the CAPTCHA and prove my humanity.
And I am even lucky: I tend to think of myself as someone who understands English quite well when listening to it, an ability that not everyone has in my country, which has German as its primary language. Given that Twitter is an international service and not really linked to English as a primary communication language, the audio CAPTCHA also excludes all the people who do not speak or hear English very well. Besides, this point doesn't even matter, because I bet you can't solve that CAPTCHA on the first try even if you are a native speaker; it's just too damn crazy.
So, what to do? I have no idea. I guess my frustration will just grow boundlessly. CAPTCHAs are the first events in IT that make me think about my ability to do this job in the future. If these trends persist, I dunno how I am supposed to take part in the Internet in the future.
Most of the currently visible problems seem to be focus related. Read this article if you are an Eclipse dev and want to get an insight into what you could fix to make it even better!
Here is a list of the shortcuts I found very valuable as a blind Eclipse user:
I can use my mobile phone to read the content of my monitor! This is the first time I feel like I am experiencing technology from the 21st century. From now on, if something fails unexpectedly, I am no longer required to ask someone to read what's displayed on the monitor; I have a new possibility to try first. While this is not practical when you actually try to interact with a program, it can at least help to figure out what error message is displayed.
Yesterday evening I had a pretty simple job to do: create a partition table on a new 2GB CompactFlash card, transfer the old partition content to it and make the card bootable. So far, so good. I did what I needed to do, and then slotted the CF card into the test machine. After turning the device on, the usual thing happened: nothing. Since the machine didn't show up on my network with the IP address I had configured, something must have gone wrong. Damn. At this point you are usually pretty stuck if you are blind and without a sighted coworker. While serial consoles are a nice thing to have, PC BIOSes are usually a bitch when it comes to working serial consoles. This test machine has a pretty old BIOS too, so forget about that route. Since it was already past midnight, there was no one around whom I could ask to read the monitor content to me either. Which led me to an idea: why not use my KNFB Reader to figure out what's going on?
To make a long story short: yes, it works! I had to turn off the light in my room to get optimal results, but after that, it basically works as reliably as with print on paper! The tabular data displayed didn't get read correctly, but that was not what I was after. The last sentence my phone spoke was what I needed to know:
"Insert disk and press a key to continue"
So while I haven't been able to fix the actual problem at hand (the BIOS does not support CF cards larger than 1GB), I was at least able to narrow the problem down to the BIOS not recognising my new CF card as bootable media.
This opens up completely new possibilities of independence in my profession.
Now who is going to put together a special version of OCRopus(tm) for mobile devices? A coworker of mine is already thinking about writing a tool for doing things like color analysis using a mobile phone camera. I think this is a very exciting idea loaded with possibilities. The world needs more practically oriented open source projects related to optical recognition!