The World

[as I find it]

24 Hours with Android: Thoughts from a Debian User

For a long time, I’ve been waiting for the ability to use a Wi-Fi connection on an Android device in conjunction with Google Voice to make and receive calls without a cell phone plan. A few recent changes in Google Voice and the availability of the GrooVe IP app made this look like a reality; so I bought myself a Nexus One. It arrived yesterday.

The basic setup was quite simple: I had no problems getting the N1 onto my home Wi-Fi network, and GrooVe IP was able to make outgoing calls through my existing Google Voice account right away. Freedom from telcos at last, I thought!

Well, not quite. Almost immediately I encountered a problem: there is apparently a bug in the drivers or firmware for the wireless adapter that causes it to shut down and disconnect from the wireless router when the phone’s screen shuts off, regardless of the phone’s setting for the Wi-Fi Sleep Policy. This means that I can’t receive incoming calls unless the screen is on.
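
One low-tech way to confirm this kind of disconnect from a desktop is simply to watch whether the phone keeps answering pings as the screen turns off. This is a minimal sketch, not a fix; it assumes the Linux ping utility from iputils and a hypothetical phone address of 192.168.1.42 on the local network:

#!/usr/bin/env python
# Poll the phone over Wi-Fi to see when it drops off the network.
# The address below is a hypothetical placeholder; substitute your own.
import os
import subprocess
import time

PHONE_IP = "192.168.1.42"

devnull = open(os.devnull, "w")
while True:
    # One ping with a two-second timeout; exit status 0 means the phone answered.
    status = subprocess.call(["ping", "-c", "1", "-W", "2", PHONE_IP],
                             stdout=devnull, stderr=devnull)
    state = "reachable" if status == 0 else "UNREACHABLE"
    print(time.strftime("%H:%M:%S") + "  phone is " + state)
    time.sleep(5)

Run while toggling the screen on and off, the timestamps make the behavior described above easy to see.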

OK, no big deal, I thought. This phone is running a relatively old build of Android (2.2), and it’s the official developer phone, so I ought to be able to find an update or workaround that fixes the issue. I’ve been winding my way down a troubleshooting rabbit hole ever since.

As a Debian user, I’m not used to this kind of experience. Here are my thoughts about the Android platform and ecosystem, after 24 hours of trying to fix this issue:

The Android “community” is fragmented. There is (or was) a lot of discussion fairly recently about fragmentation in Android software. A related and, I think, bigger problem is fragmentation in the user community. There does not seem to be a central place to look for information about software problems, or to ask for help. Instead, there’s a mishmash of ad-supported Web-based forum sites, generally with a very low signal-to-noise ratio. Reports from frustrated users are many, but follow-ups from technically capable people are few and far between. If a solution is posted at all, it’s difficult to tell whether it worked and whether it would be worth trying yourself. Sometimes, someone posts a link to yet another third-party site with instructions to load a .zip file they’ve posted onto your device — not exactly confidence-inspiring, to say the least.

Compared to the Debian world, this is like being back on Windows: authoritative information is almost impossible to find in the user community, and even when information looks authoritative, the fact that it’s posted on some random Web site instead of an official wiki or mailing list causes me to think twice.

There are a few things that Google and other stewards of the Android ecosystem could do to fix this community fragmentation:

  1. Create a single, centralized mailing list, or series of mailing lists, for Android users, along the lines of those found at lists.debian.org. A mailing list, as opposed to a Web-based forum, allows proper threading and quoting (and doesn’t require clicking through artificial pagination), which makes it easier both to post and to find high-quality information. Technically adept users will be more likely to provide good help to others if there is a single place to do so, and that place is accessible via e-mail and/or NNTP. (Of course, a Web-based interface is still useful, especially for searching the list archives, but it shouldn’t be the primary means of access.)
  2. Create a centralized wiki containing articles about common problems, HOWTOs for addressing them, and attachments of files that users may need.
  3. Use these mailing lists and wiki for communication with the community. Of course, a mailing list and wiki won’t improve the signal-to-noise ratio unless the people who actually maintain Android have some involvement in them. Representatives from Google and from the handset manufacturers should regularly answer questions on the list, and they should enforce a certain amount of netiquette: users should be encouraged to search the archives first, ask detailed questions, refrain from flaming, etc.

This last point brings up another important issue: there seems to be a huge disconnect between “users” and “developers” in the Android world. Contacts between users needing technical assistance and the people who can actually provide that assistance are few and far between; and when they do happen, the message often amounts to, “We’re looking into it. Sit tight until the next release!”

To their credit, Google appears to want some of this interaction to happen. They have “community managers” who occasionally respond to user issues in the Google-hosted forums. Unfortunately, questions from users far outnumber messages from community managers, and few users are in any position to help one another. The contrast with the Debian users list is stark: in the Debian world, there is no artificial boundary between (enabled, enlightened) developers and (disabled, helpless) users; there are just users with varying degrees of knowledge, who contribute what they can when they can.

Google also has a publicly accessible issue tracker, including a feature that allows other users to upvote an issue. This is good as far as it goes, but it doesn’t seem to have much influence on the direction of Android development. One gets the sense that Google uses a different, internal issue tracker for Android bugs and features. Android releases, for example, do not seem to list the specific issues that they close or address. So although some of the information I’ve seen indicates that my particular problem has been solved in Android 2.3.4, I can’t find any official confirmation of this.

Again, my point of comparison here is the Debian bug tracking system, where one sees regular contact between maintainers and users, and the relationship between issues and releases is made public and obvious.

I would much prefer to see a single issue-tracking system for Android that’s used by both users and developers, and that makes it obvious to anyone who cares to look exactly which issues are being worked on, when there’s a patch available, and when a fix will be available through an update.

Finally, there is the issue of the opacity of the update process. Android updates are rolled out to users a few at a time, without any explanation of why one group gets them before another. It’s possible to find instructions for installing these updates manually, but again, the information comes from third parties, not the Android developers, and it feels far from officially sanctioned. I have no idea why my phone thinks it’s up-to-date at version 2.2 (remember, it’s an unlocked, developer phone) when others have gotten 2.2.x or 2.3.x updates over the air.

I understand that for many users, updates come via their cell carrier, which has its own set of headaches. But for those users who are running an unmodified, unmediated Android on officially-supported hardware, I can’t understand why there is no equivalent of apt-get dist-upgrade the moment a new stable release is available.

All of this leaves me feeling a little helpless in the face of what I thought was going to be a minor problem. I want to like Android, and I hope the project succeeds in building a stronger community and a stronger relationship between users and developers. From where I sit, though, it has a long way to go.

Written by whereofwecannotspeak

June 25, 2011 at 4:09 pm

Posted in Uncategorized

Some neat e-reader tools

After deciding some months ago that none of the currently available e-readers could meet all my needs, I didn’t think about them for a while. The recent “price war” between Amazon and Barnes and Noble has me reconsidering: at less than $200 for a Nook or a Kindle, I could probably live without some of the hardware features I want, especially if there is software that can help bridge the gap.

I’m still undecided about buying a device, but I wanted to catalog some of the programs and hacks that I’ve (re-)discovered as I looked into the issue a second time.

  1. Calibre looks like a great piece of desktop software for managing e-books. It knows how to talk to multiple e-readers, and can inter-convert between popular formats, allowing (e.g.) Kindle readers to read ePub books by first converting them to Mobipocket format (a minimal command-line sketch of this follows the list).
  2. Savory extracts some of the code used by Calibre to allow Kindle users to download and convert ePub books directly on their reader, without having to go through desktop software.
  3. epubjs is a nice ePub reader written entirely in JavaScript, so if the Kindle and other non-ePub devices ever acquire a JS-enabled browser, or even just a local JS interpreter/engine, it will be another option for reading ePubs there. There is also a post at Ajaxian rounding up some other JS-based ePub readers.
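
For anyone who wants to script the Calibre conversion mentioned in item 1, here is a minimal sketch built around Calibre's ebook-convert command-line tool. It assumes Calibre is installed and that a hypothetical book.epub sits in the current directory:

#!/usr/bin/env python
# Convert an ePub to Mobipocket with Calibre's ebook-convert tool, so the
# result can be read on a Kindle. The input file name is a placeholder.
import subprocess
import sys

source = "book.epub"
target = "book.mobi"

# ebook-convert infers the input and output formats from the file extensions.
status = subprocess.call(["ebook-convert", source, target])
if status != 0:
    sys.exit("Conversion failed; is Calibre's ebook-convert on your PATH?")
print("Wrote " + target)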

The main issue I have, though, is in dealing with PDFs. It doesn’t look like I’m going to be able to justify the expense of a large-format e-ink reader in the near future (the Kindle DX is the cheapest, I think, at $489!). I’m still looking for a comprehensive set of tools for manipulating PDFs so that I could read them easily on a smaller screen. Specifically, I need tools for:

  • extracting text from text-based PDFs, or at least being able to reflow them and trim their margins
  • converting scanned images of book pages in PDFs into text via OCR software

I haven’t found a complete solution for either task, but I have come across various programs that do some of these things (a rough sketch using two of them follows the list):

  1. PDFMunge is a Python program that can help with the task of trimming margins and reflowing text in text-based PDFs.
  2. pdftk is a comprehensive Java library and command line tool for manipulating PDF files.
  3. Google Docs now has the option to use OCR to convert PDFs to text. It doesn’t work perfectly, especially for more technical material, but it’s easy to use. I believe it is based on Ocropus and/or tesseract-ocr, both of which are Free software and can be built and run locally (if you can figure out how to do so…the dependencies are pretty significant).
  4. Briss looks like a nice way to crop scanned PDFs using a GUI interface.
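
To make the two tasks above a bit more concrete, here is a minimal sketch wrapping two of the relevant command-line tools: pdftotext (from poppler-utils) for pulling the text layer out of a text-based PDF, and tesseract for OCR on a single scanned page image. Both must be installed separately, the file names are hypothetical placeholders, and real scans usually need per-page image extraction and cleanup before OCR:

#!/usr/bin/env python
# Two rough building blocks for reading PDFs on a small screen.
import subprocess

def extract_text(pdf_path, txt_path):
    """Pull the text layer out of a text-based PDF so it can be reflowed."""
    # -layout tries to preserve the original column layout in the output.
    subprocess.check_call(["pdftotext", "-layout", pdf_path, txt_path])

def ocr_page(image_path, out_base):
    """Run OCR on one scanned page image; tesseract writes out_base.txt."""
    subprocess.check_call(["tesseract", image_path, out_base])

if __name__ == "__main__":
    extract_text("article.pdf", "article.txt")       # a text-based PDF
    ocr_page("scan-page-001.tif", "scan-page-001")    # one scanned page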

Written by whereofwecannotspeak

June 25, 2010 at 1:08 pm

With all the e-readers out there, why can’t I find one I want?

Lately, I have been intrigued by the prospect of buying an e-reader, mostly because I find myself printing and carrying around an enormous number of PDFs. As a graduate student, I have to read quite a lot, and it would be great if I could keep all my readings in one place, with notes, in a searchable format. I don’t much like reading on a computer screen, so a reader with an e-ink display seems like it would be a great solution for me.

Sadly, none of the e-readers available today seem to have the full set of features I would want:

  1. e-Ink Display: I can’t read for long periods on an LCD, so that rules out something like a smartphone, netbook, or tablet PC.
  2. Expandable storage: one of the big downsides of the current Kindle is that its storage is limited to the 1.4GB available to you when the device ships. I especially can’t understand why Amazon removed the SD card slot that the Kindle 1 had.
  3. Physical keyboard and note-taking abilities: this tells in favor of the Kindle, but against the Barnes and Noble Nook, as well as against a lot of the other e-readers I have seen. I want tactile feedback when I’m typing; I can’t stand typing on touchscreens. Some readers appear to offer no textual input at all, which isn’t much use to me; I need to mark text as I read.
  4. Wi-Fi: another black mark against the Kindle 2. It’s nice that Amazon, Barnes and Noble, and others want to offer me free 3G service as a way to ensure that I can impulse-buy books from anywhere. But without Wi-Fi as a fallback, I am scared off by clauses like this one in Amazon’s License Agreement and Terms of Use:

    Amazon reserves the right to discontinue wireless connectivity at any time or to otherwise change the terms for wireless connectivity at any time, including, but not limited to (a) limiting the number and size of data files that may be transferred using wireless connectivity and (b) changing the amount and terms applicable for wireless connectivity charges.

    A fully Internet-ready e-reader would be much more useful to me, but it’s clear that Amazon wants nothing of the kind. They want to control the kind of information I can get, which they couldn’t do if they included a network interface that — horrors! — didn’t route all traffic through their servers. I don’t really want Amazon, or any other company, knowing everything I read online. And I don’t trust them not to “discontinue…or otherwise change the terms” of my Internet access through their blessed portal.

  5. Support for open formats, including PDF and ePub: most of the non-Kindle readers win here again, though it’s not clear how much of the PDF standard any of them supports. A lot of the PDFs I read are scanned images from actual books, and I would like to have simple tools for cropping pages, or splitting one page into two, to better fit a reader’s screen.
  6. Extensible platform: I’d like to be able to write my own programs, or download others’ from the Internet, if the built-in software doesn’t cut it — preferably without having to root the device. For doing academic reading, programs like a multi-lingual dictionary or a citation database would be helpful. Amazon has a Kindle Development Kit in the works, which is nice, except that it’s Java-based; I would much prefer, say, a combination of Python and C. (I’m not sure yet if JVM implementations of Python, Ruby, Scheme, etc. will work on the Kindle…but that would be great!) The Nook has nothing so far, but the fact that it’s running Android points to hackability in the future, with or without Barnes and Noble’s support. Other readers have more explicitly open software platforms, but without a large number of users, they probably won’t see much development.
  7. Low price: the $259 that Amazon and Barnes and Noble are currently both charging is about as high as I would be willing to go. I simply can’t afford to sink $300 or more into a highly specialized device. This unfortunately rules out a lot of the lesser-known readers for me, because they don’t have the agreements with publishers that would allow them to subsidize their hardware with e-book sales.

So what am I to do? I’d love to be proven wrong, and find a reader out there that has all these features and more. But until then, I think I’m stuck with paper.

Written by whereofwecannotspeak

February 13, 2010 at 3:26 am

Posted in Free Software, Geeky Shtuff, Ideas

How to Save the Newspapers: (Yet Another) Proposal

I have seen a number of articles recently about how and why the American newspaper is dying, and what to do about it (cf. Bring Back the Evening Paper!, Final Edition: Twilight of the American Newspaper, Network Effects, YCombinator’s New News idea (#3), and much ado about Rupert Murdoch pulling his various media outlets’ content from Google).

My own experience tells me that part of the reason papers are struggling, either in their print or online versions, is that they are locked into a terrible revenue model based on advertising. Ads take up greater and greater space in the physical paper, and seek more and more attention on a digital page. They are typically not relevant to my interests, and range from nuisances (an extra page turn to get past the spread for expensive perfume) to actual intrusions (graphics that cover the news content in my browser until I explicitly close or disable them). The papers are apparently in a tight spot: fewer people are subscribing, so they must rely on more advertising revenue to operate; but since readership is declining, individual ads are worth less. Papers must therefore be more aggressive about the number, size, and flash of ads they print or display; this further annoys, and drives away, readers. Of course, they can’t be as good at the advertising game as online media can — they can’t personalize print ads, they can’t print anything animated or with sound, and they probably can’t afford to keep separate advertising departments and print different editions for every city they sell papers in — so their advertisements will never be as targeted or effective. Their value will decrease accordingly, leading to further reduced income.

If left unchecked, this cycle will result in papers with more and more advertising, and fewer and fewer readers, until they can no longer afford to operate. That’s the end of a paper, which, for many reasons, can be a damn shame.

The problem here comes from the assumption that the papers must maintain a certain revenue level to remain profitable. They turn to advertisers to fill in the gap in revenue left by subscribers, which slowly drives more subscribers away. One way to save the newspapers, therefore, is to do away with that assumption. Instead of looking to maintain revenues, they should cut costs.

They shouldn’t cut costs in the ways they have been, however: closing bureaus, laying off writers and editors, putting more content online with less editorial review. These are cuts to the things which make papers valuable. The value of a newspaper is in the professional training and the contacts of its reporters and editors. Despite what proponents of “New Media” might say, trained journalists still have an edge in their access to world events and the people involved, their ability to report those events in a straightforward way, and their knowledge of how to interpret and draw connections between those events and others.

So where should the papers cut? There is one major area that, apparently, they haven’t considered: getting out of the printing business. This doesn’t mean that papers shouldn’t be printed; it just means that someone else should print them. Specifically, newspapers (especially national newspapers) should make their content available, free of ads, to anyone who wants to print and sell it — for a fee. This fee might be subscription based, or it might be on a per-paper basis, but it would allow anyone who thinks they can efficiently print (and possibly deliver) a newspaper to do so. The printers would then turn around and sell the papers to individual customers: newsstands, coffee shops, delivery services, even individuals.

Ideally, newspapers should allow printers broad freedom to modify the content of the newspaper, so that printers can seek to make a profit by printing different formats, placing local ads, and so on. The only way printers would be likely to sign up for this arrangement, after all, would be if they thought they could somehow do a better job printing and distributing papers than the newspaper companies themselves could, so they would need an arrangement that allowed them to apply their own expertise. The newspapers might dictate some aspects of the printing — for example, they might not let their title be printed if certain severe content modifications were made — but the fewer restrictions on the printers, the better. If a printer wants to use a different font because it uses less ink or looks better, or a different paper quality because it’s cheaper in his area, or to employ any other strategy for making money selling copies of the news, he should be able to do so.

I have no numbers to back it up, but I think this strategy has prima facie plausibility because it puts both aspects of the newspaper business in the hands of people who know it best. Newspaper companies would be paid for producing the content which they are experts at producing: writing about the news, in all its many forms. Printers would likewise be paid for the fruits of their expertise: physical copies of that content, produced cheaply and sold in a relatively small local area. Physical newspapers, like many information resources, suffer from a “last mile” problem that small, local businesses are better equipped to solve than a central printing authority. By shifting the cost of producing physical papers to the people best equipped to solve that problem, newspaper companies would be free to focus on the news, instead of on how to sell and print advertising. With their revenue source back in alignment with the actual value of their product, they might just be efficient enough to stick around.

Written by whereofwecannotspeak

December 27, 2009 at 10:14 pm

Posted in Ideas

A Distributed Funding Model for Free Software Development

One of the frequently-touted benefits of using Free software is that users are not helpless: if a program doesn’t work, or doesn’t do quite what they need it to do, they can either fix the problem themselves (because they have the source code) or they can ask someone else to fix it, possibly for a fee. Users with the same needs can pool their resources to ensure that those needs are met. No user is dependent on a specific developer or company to make the changes they need.

In practice, it is difficult for individual users to exercise this freedom. This is because:

  • users may have no idea who to ask to fix a problem or add a new feature; given the nature of most Free software projects, finding the right programmer might be difficult
  • even if a user knows who to ask, the programmer or development community might be unwilling to fix the problem because they don’t see it as worth their time
  • if the programmer or development community offers to fix the problem or add the feature for a fee, the user may not be able to afford it herself, and may have no idea how to find other people who are also willing to pay for the feature or bugfix

So, although users of Free software have an important freedom in theory, they are still often unable to reap some of the benefits that Free software promises, such as avoiding dependence on a particular developer, or pooling their resources to support development.

This may be one reason why users of popular desktop systems like Ubuntu are often perfectly happy to install proprietary software: even if they know the advantages of using a Free program, without a way to exercise their freedoms, they might as well use a proprietary program that works better or has commercial support.

Meanwhile, because programmers don’t often have a way of getting paid by users to write or adapt Free code, there is a common perception that the only way to earn a living writing software for the masses is to make it proprietary, or to be hired by a large company that will pay you to write software they need. It is difficult for programmers to earn money just by contributing to the Free software projects they are most interested in; sometimes, they ask for donations, but I doubt that many of them expect to earn much from them. We continue to see proprietary programs offered for Free systems as a result. Ubuntu’s Software Center is the latest incarnation of a solution to this perceived problem: Canonical plans, eventually, to sell proprietary applications alongside Free programs as a way of earning money to support their organization. They also think it will attract more programmers to GNU/Linux generally. I hope to be corrected, but I don’t see any part of their proposal that will alleviate the bad choice developers currently face between writing Free code and earning money. They aren’t proposing a payment system for Free software development.

This problem seems eminently solvable. I think we can and should fix the problem of allowing users to exercise their freedoms, and helping developers get paid to write Free software, at the same time. I have a proposal for how to do so: let users pledge contributions toward bugfixes and feature implementations. The basic idea is that the issue trackers used by many Free software projects could easily be extended to allow users to pledge contributions toward particular programming projects; when those projects are completed, each user would pay their pledge to the programmer or team who submitted the patch.

Imagine the following scenario, for example. Janet notices a bug in the way her wireless card works in Ubuntu. She searches on Launchpad and discovers that others have the same problem; there is an open bug report for her issue. Though she has no programming knowledge herself, she is willing to pay $50 toward getting this bug fixed, because it is a major inconvenience and she has no workaround. She clicks a “Pledge now” button in the Web interface and enters this amount. Meanwhile, Katherine, a savvy kernel hacker, notices that a lot of people are having the same problem, and spends some time improving the wireless driver for Janet’s card. Her patch fixes the issue, and the bug is closed. Janet and anyone else who pledged are notified that a patch is available, and that they must forward their payments to Katherine. Katherine earns a modest sum for her work, allowing her to spend more time fixing wireless card issues than she otherwise would be able to.
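
A minimal sketch of the bookkeeping such a tracker would need might look like the following; the bug ID, names, and amounts are invented, and a real system would live inside Launchpad or a project's own tracker rather than a standalone script.

# A toy pledge ledger for a single issue tracker. Everything here is
# hypothetical: bug IDs, names, and amounts are illustrative only.

class PledgeLedger(object):
    def __init__(self):
        # bug_id -> list of (pledger, amount) pairs
        self.pledges = {}

    def pledge(self, bug_id, pledger, amount):
        """Record that a user promises `amount` toward fixing `bug_id`."""
        self.pledges.setdefault(bug_id, []).append((pledger, amount))

    def total(self, bug_id):
        """Total amount currently pledged toward a bug."""
        return sum(amount for _, amount in self.pledges.get(bug_id, []))

    def close(self, bug_id, fixer):
        """Mark the bug fixed; return (pledger, fixer, amount) payment notices."""
        return [(pledger, fixer, amount)
                for pledger, amount in self.pledges.pop(bug_id, [])]

ledger = PledgeLedger()
ledger.pledge("wifi-driver-bug", "Janet", 50)
ledger.pledge("wifi-driver-bug", "another user", 20)
print(ledger.total("wifi-driver-bug"))                 # 70
print(ledger.close("wifi-driver-bug", "Katherine"))    # who owes what to whom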

Obviously, there are a lot of logistical problems to be solved here, such as:

  • What happens when multiple patches are required to fix a bug, or multiple programmers work on the fix? How should the payment be distributed among patch writers, package maintainers, and others who contribute to improving the quality of the software? (One crude possibility is sketched after this list.)
  • How do users get a hold of the new version? Do they wait for the patch to move through the normal packaging process before sending their payments? (What if a fix is already available upstream at the time a pledge is made?)
  • Who collects the payments, and who ensures that they are made? (Would an arbitration system be needed?)
  • What happens to pledges if a bug is marked invalid, or made a duplicate of another?
  • What if the patch fixes the problem for some users, but not others?
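
On the first of these questions, one crude possibility is a pro-rata split: let the project assign each contributor a weight and divide the pledged total accordingly. A sketch, with invented names, weights, and rounding rule:

# Split a pledged total among several contributors in proportion to
# agreed-upon weights. Names and weights are invented for illustration.

def split_payment(total_cents, weights):
    """Return {contributor: share_in_cents}, summing exactly to the total."""
    weight_sum = float(sum(weights.values()))
    shares = {}
    allocated = 0
    for name, weight in sorted(weights.items()):
        share = int(total_cents * weight / weight_sum)
        shares[name] = share
        allocated += share
    # Give any rounding remainder to the first contributor alphabetically.
    if allocated < total_cents:
        first = sorted(weights)[0]
        shares[first] += total_cents - allocated
    return shares

print(split_payment(7000, {"Katherine": 3, "package maintainer": 1}))
# Katherine's share is 5250 cents; the maintainer's is 1750.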

I think these problems can be addressed with a little engineering and/or “social” innovation. They make a case for experimenting with different informal payment mechanisms to see what works best; they don’t constitute an argument against pledge-based payment systems in general.

The real hurdles presented by this approach are more abstract. They crop up whenever money is introduced where it was previously absent. Payment systems could significantly re-structure Free software communities, and possibly even threaten their existence. Although a payment system could help users exercise their freedom and help programmers to earn a living writing Free code, it might do so at the expense of the volunteerism and collaborative spirit that have made the Free software movement so successful. Programmers who currently work on a project as much as they are able, for their own satisfaction, might become resentful if others start earning money to do the same kind of work. Project leaders might find themselves having to manage payments, in addition to the difficult technical and social problems they already work to solve. And while some users would be able to exercise their freedom more effectively than they can now, the extent to which they can do so would essentially be a matter of their ability to pay, a difficult moral problem in its own right.

For these reasons, we should proceed with caution. I think organizations like Canonical can and should help design these payment systems; after all, they have an interest in making money by developing and distributing Free software. But it should be up to the teams behind individual projects to participate in any payment system, and to decide how they will distribute payments among their members.

Written by whereofwecannotspeak

December 22, 2009 at 4:12 pm

A Small Bit of Enlightenment

Someone once asked me in an interview: “What is your favorite Unix tool?” Not sure what to say, I simply tried to avoid appearing ignorant and replied: “grep.”

Thinking about this question again, I realized that there is a much better answer. Today, my favorite Unix tool is: the pipe.

Written by whereofwecannotspeak

November 17, 2009 at 12:54 pm

Posted in Geeky Shtuff, Ideas

Getting Gnus to read mail over IMAP

I have struggled, off and on, to get Gnus, the Emacs mail reader, set up to read my email. Since I’m mostly coming from the world of GUI email clients, without any experience using a newsreader (not that I wouldn’t like to — I’ve just never had access to a news server), Gnus presented some conceptual hurdles.

Gnus treats mail like news, meaning you “subscribe” to various mailboxes, and once read, email is simply hidden unless you explicitly ask to see it. The idea is that the activities of reading mail and news are very similar — you want to see what’s new without having the old stuff around, sort or filter incoming messages, write responses, then get on with your day — so the protocol used to access them (NNTP vs. POP, for example) shouldn’t make a difference in how you do them. Gnus therefore unifies these activities in a common interface, and handles different protocols and storage methods through different “virtual servers,” i.e., back-ends that find mail or news wherever they may be. To use Gnus, you must learn how to tell it the details of how to access your messages via one or more back-ends.

So I set out to find the right back-end for an IMAP mail account. This raised a further complication: unlike most of the mail back-ends, which assume mail is stored locally, IMAP stores messages on the server. This makes the IMAP back-end more like a news-server back-end than a mail-spool back-end.

After reading the manual section on IMAP, I tried a variety of settings that didn’t quite seem to work: although I could connect to the server and read my mail from the Server Buffer, I couldn’t seem to “subscribe” to any “groups”, or split my IMAP mail into these groups. Fortunately, this excellent article provided a complete, working IMAP example. I’m hoping to be a Gnus addict by the end of the summer.

UPDATE: I’ve been successfully using Gnus for a few months now. Here are some relevant bits from my .gnus file to get any new users started:

;; GMANE is about the only free news server I've seen.
;; I set it to my primary server so I can read a few Free software mailing lists.
(setq gnus-select-method
     '(nntp "news.gmane.org"))
;; Mostly, though, I just want to read my mail.
;; This setup uses a standard SSL-based connection to read the mail for the accounts I have through
;; UC Berkeley:
(setq gnus-secondary-select-methods
      '((nnimap "calmail" ; primary email
		(nnimap-address "calmail.berkeley.edu")
		(nnimap-server-port 993)
		(nnimap-authenticator login)
		(nnimap-expunge-on-close 'never)
		(nnimap-stream ssl))
	(nnimap "ocf" ; secondary account
		(nnimap-address "mail.ocf.berkeley.edu")
		(nnimap-server-port 993)
		(nnimap-authenticator login)
		(nnimap-expunge-on-close 'never)
		(nnimap-stream ssl))))

Written by whereofwecannotspeak

July 15, 2009 at 3:24 pm

Posted in Free Software, Geeky Shtuff
