2.2 isn’t likely to satisfy all those concerns, but it is a step forward. There are some subtle but key aspects you might miss, since they aren’t quite headline news for most gadget sites. Apologies for an atypically technical post, but this stuff is important if the platform has a future for readers of this site. And if anyone doubts this is news, let me tell you – talking to music developers from a variety of backgrounds, I hear both immense desire to look at Android, and some significant skepticism about the limitations, many of them specific to performance.
Audio gets a separate priority. Changes to AudioManager mean that audio can gain focus. Currently, audio processes on Android often get preempted by other processes, so that literally, another service syncing data can screw up your sound. What I can’t tell here is whether the audio focus helps successfully prioritize sound – or if it just helps mix sound with other apps. At the very least, it should avoid other apps cutting into a music performance app without you wanting that. If we’re lucky, Google has also improved audio performance, so that audio apps can set shorter buffer times without adding clicks and pops. (I’ve found disabling data sync stops sound from skipping, but that’s not how it’s supposed to work – not even close.) It’s possible these issues could be positively impacted by improvements to the Java virtual machine (in fact, it’s almost a sure thing).
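To make the audio focus idea concrete, here's a rough sketch of how an app would claim focus under the new AudioManager calls. The AudioManager methods and constants are the platform's; the wrapper class, its method names, and the comments are my own illustration, not part of the API.

```java
import android.content.Context;
import android.media.AudioManager;

// Sketch: claiming audio focus with the Android 2.2 AudioManager API.
// The FocusHelper class itself is illustrative, not part of the platform.
class FocusHelper {
    private final AudioManager am;
    private final AudioManager.OnAudioFocusChangeListener listener =
            new AudioManager.OnAudioFocusChangeListener() {
        public void onAudioFocusChange(int focusChange) {
            switch (focusChange) {
                case AudioManager.AUDIOFOCUS_LOSS:
                    // Another app took focus long-term: stop playback.
                    break;
                case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
                    // Brief interruption (e.g. a notification): pause or duck.
                    break;
                case AudioManager.AUDIOFOCUS_GAIN:
                    // Focus is back: resume.
                    break;
            }
        }
    };

    FocusHelper(Context context) {
        am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    }

    boolean grab() {
        // Ask for long-term focus on the music stream.
        return am.requestAudioFocus(listener, AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN)
                == AudioManager.AUDIOFOCUS_REQUEST_GRANTED;
    }

    void release() {
        am.abandonAudioFocus(listener);
    }
}
```

Note that this is cooperative: other apps have to play along, which is exactly why the open question above – prioritization versus mere mixing – matters.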
Media recording APIs are finally set up right. As Google puts it, “New APIs in MediaRecorder for specifying audio settings for number of channels, encoding and sampling rates, bit rate.” Or, as I’d put it, less charitably, “MediaRecorder API no longer involves pain.” This is also a big deal for Android applications that take audio input, including in-progress ports of free synthesis environments Pd and SuperCollider, and Jasuto, whose developer got tripped up on this very problem.
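In code, the new settings look something like the sketch below. The setAudioChannels / setAudioSamplingRate / setAudioEncodingBitRate calls are the new 2.2 additions; the specific values (stereo, 44.1 kHz) and the output path are illustrative placeholders, and real code would want error handling around prepare().

```java
import android.media.MediaRecorder;
import java.io.IOException;

// Sketch: the new Android 2.2 MediaRecorder audio settings.
// Values and the output path are illustrative, not prescriptive.
class RecorderSketch {
    static MediaRecorder startRecording(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);

        // The new Froyo calls: channels, sampling rate, and bit rate
        // are finally under the developer's control.
        recorder.setAudioChannels(2);
        recorder.setAudioSamplingRate(44100);
        recorder.setAudioEncodingBitRate(96000);

        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();
        return recorder; // caller should stop() and release() when done
    }
}
```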
MotionEvent has been improved, for better multi-touch event handling. Developer Luke Hutchison has had to manually write code that works around some unreliable multitouch processing. Google promises better reporting in the new version. Unfortunately, I don’t think we’ll know until this build ships; this can only be tested on devices.
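For context, here's the shape of the per-pointer MotionEvent API whose reporting 2.2 is supposed to make more reliable. The MotionEvent calls are real; the handler class and the routing comment are illustrative.

```java
import android.view.MotionEvent;

// Sketch: reading all active pointers from a MotionEvent, as delivered
// to View.onTouchEvent(). The TouchSketch class is illustrative.
class TouchSketch {
    void handleTouch(MotionEvent event) {
        int count = event.getPointerCount();
        for (int i = 0; i < count; i++) {
            // getPointerId() gives a stable id per finger, even as the
            // index shifts when other fingers go down and up.
            int id = event.getPointerId(i);
            float x = event.getX(i);
            float y = event.getY(i);
            // e.g. route (x, y) for pointer `id` to a voice or a pad
        }
    }
}
```

It's the reliability of the coordinates and ids coming out of this loop, not the API shape, that developers like Hutchison have had to patch around.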
Better storage. Flexibility in storage is a key advantage of Android in some areas (cough, Apple), but it needed work. Now, the ability to (finally) install apps to the SD card means that music-rich apps or interactive music albums become possible. And automated data backup could be a boon to people using Android as a creation device.
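For the curious, SD card installation is opt-in per app, via a single attribute in the manifest. A minimal sketch (the package name is a placeholder):

```xml
<!-- In AndroidManifest.xml: android:installLocation lets the system
     (or the user) move the app - and its bulky audio assets - to
     external storage. "preferExternal" is the other useful value. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    android:installLocation="auto"
    package="com.example.musicapp">
    ...
</manifest>
```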
A smarter NDK. Debugging. ‘Nuff said. And that’s something native developers (see, again, SuperCollider and Pd) can download and use right now, today.
By the way, motion fans should be happy with doubled camera preview FPS and YUV support in OpenGL, among other tweaks.
Now, the bad news.
JIT (just-in-time) compilation for Java translates to vastly improved performance – you couldn’t wish for more on that front. But there are still things musicians could use.
Everyone’s been writing open letters to Steve Jobs. Let’s make this an open letter to Google.
Why should Google care about musicians? How about because Google threw down the gauntlet today on its comparative “openness,” compared life with Apple devices to 1984, and prides itself on its Linux heritage?
No, actually — let me put this better: don’t you want Android to be a rock star, Google?
- Native audio access. The new NDK added still more support – native access to image buffers, which will help people writing graphics apps. But with that and OpenGL covered, audio should be next on Google’s list. Android devices all appear to use ALSA, the Linux audio standard. There’s really no reason native DSP code shouldn’t talk directly to the audio output. Google: pay attention. Audio apps have been some of the biggest hits on Apple’s App Store; Smule alone has made ocarinas, AutoTune, and Glee fandom cultural phenomena on the iPhone. Android really does need improved audio performance to compete. Unless a miracle has happened on the Java side, that means providing native audio access in the NDK, at least as an option. And it’s self-selecting: the only programmers who would even try to write their own DSP code and ALSA interfaces are the ones who already know what they’re doing. You’re not going to get stupid questions about this on IRC the way you do with people who haven’t read the UI documentation. Get it?
- Reach out to better multitouch hardware partners. I’m beginning to think that waiting for OEMs to stop sucking at multitouch on phones and tablets is a losing game. So, Google, you’ve got the biggest tech brand power on the planet now. You’ve got smart people. Find a way to hook up your mobile partners with the people who can make touch hardware and firmware work, and the whole platform wins.
- Hardware support (question mark). The Droid already supports USB host mode – so why isn’t it a standard? And why shouldn’t Android benefit from the Linux kernel and provide external hardware support in the API? Help the device live up to the “open” hype you keep espousing; it doesn’t make for a flattering comparison if the iPhone OS has more hardware features for developers than Android, especially with tablets on the horizon. So why a question mark? It looks like good stuff is happening here. Tablets show promise, and the announcement of Google’s TV product suggests not only video out but USB, too. So the key is, will developers be able to use those features? It’s not really an “open” platform if the answer is no. It just seems at this point like we’re waiting on standard APIs and documentation. TV video out is a safe bet when Google’s promised TV SDK appears early next year. But by then, Apple may have a similar offering – and it’d be unfortunate if Google didn’t extend capabilities to their whole line, rather than slice up the platform.
Specific as these things are, they could be the detail that makes a (pardon the word) “magical” app for the platform. And hey, that’d also mean some rock stars using Android. That can’t be too bad, can it?
I’m hopeful. And I think the vision of the platform could be extraordinary. Imagine an Android phone that connects to a music rig onstage, an audiovisual app that makes the audio output really shine, or an interactive album you can watch on an Android-powered TV accessory from your couch. On that last note, imagine people listening to albums in their entirety, blissing out to specialized generative visuals. (Return of the psychedelic prog rock album cover!)
Users, take all of this with a grain of salt, because I still want to test 2.2 to evaluate real-world performance. But it’s worth saying, because the fact that we have 2.2 means Google’s talented Android team is already moving on to the next thing.
Updated – More from I/O, and Native Audio Access
Here’s one major difference about developing for Android – developer events aren’t under an NDA, and you can find out about what’s happening to a platform in advance and respond. You can share information, follow open bug reports, and see them as they’re changed. This shouldn’t be rocket science; it’s certainly a given in the free software community. But it is a marked difference relative to something else beginning with ‘A.’
Accordingly, while I couldn’t be at I/O, I do get to talk about it. And notes from a session on Advanced Audio Techniques suggest the 2.2 changes to audio are significant – and are accompanied by an upcoming hook for doing native audio.
From the roadmap notes, OpenAL and OpenSL may be the ideal native hook to the audio buffer. OpenAL itself leaves a bit to be desired for musical applications; it’s a fairly primitive API, not really a direct analog to OpenGL. But if well-implemented on Android, particularly in how it talks to the audio output, it could solve many of the problems Android users now face; it’s all in the implementation. What’s coming:
– OpenSL native API to provide audio track functionality in native code
– OpenAL support, implemented as a shim over OpenSL; should help in porting games
– Audio effects processing
– More low-level APIs exposed, including a Java API that lets you build the player graph in your application. If you have a streaming format the platform doesn’t support, previously there has been no access to the codecs.
– WebM support (VP8 + Vorbis)
– FLAC decoder
– AAC-LC encoder for devices that don’t have hardware support
– AMR-WB encoder
(The encoders will be open sourced.)
You can read those notes from the session here, thanks to an attendee; I’ll post a link once this goes up on YouTube:
Via Bug 3434
Also very awesome – hardware-accelerated processing.
NDK FPU and SIMD NEON support means that Android applications can now take advantage of the floating-point and specific hardware acceleration features of the ARM-based hardware architectures underlying these devices. If there was any doubt that Android could become a major solution for embedded musical hardware, this should change that. (The iPhone/iPad will reap some similar benefits, so this is as much about the iron underneath as it is the OS.)
ARM claims 60-150% performance gains as a result. (Thanks to Martin Roth for the tip!)
I expect I’ll be talking to developers more at Droidcamp this coming week in Berlin.