I am excited about the possibilities that come with Project Glass. According to an article I read in the New York Times, it will be based on the Android software:
The glasses will use the same Android software that powers Android
smartphones and tablets. Like smartphones and tablets, the glasses
will be equipped with GPS and motion sensors. They will also contain a
camera and audio inputs and outputs.
I am a front/back-end web developer by profession. I learnt C++ and Java at university and actually retained this knowledge :). I’ve pretty much grasped the more advanced concepts of programming so let’s skip the basics.
Would gaining experience coding on the Android platform for phones better prepare me for coding on Project Glass when it comes out?
There are two ways to develop for Google Glass. The Mirror API lets you create web applications for the platform, and an Android-based SDK (the GDK) for building native apps was released recently.
The Mirror API is a RESTful API, and the development model with it is very different from what we know from Android, and even from traditional web apps. Traditional web apps are thick clients on which you can run some code (in the browser). With the Mirror API, the Glass device is presented to your application as a web service in the cloud, which you control by sending requests over the Internet. That control is limited to inserting interactive cards into the user's timeline and reacting to the user's interactions with them. Every interaction involves a round trip from the device to your server and back. This shapes, in a way, what kind of applications will be possible with Glass. For instance, it seems the user needs to be always online for the apps to work at all.
This model has benefits too. Most importantly, it allows you to write the app in any language that can speak HTTP: Java, Python, almost anything, even Haskell. The downside is that the app does not run directly on the Glass device, but on a computer that is either yours or provided by Google (think App Engine).
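To make the Mirror API model concrete, here is a minimal Python sketch of inserting a text card into the user's timeline. It only builds the HTTP request rather than sending it; the endpoint follows the Mirror API v1 URL pattern, and `ACCESS_TOKEN` is a placeholder for a real OAuth 2.0 token your app would have to obtain first.

```python
import json
import urllib.request

# Placeholder: a real app would obtain this via the OAuth 2.0 flow.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

# Mirror API v1 endpoint for timeline items.
TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_card_request(text):
    """Build (but do not send) a POST request that inserts a text card."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        TIMELINE_URL,
        data=payload,
        headers={
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_card_request("Hello from my web service!")
print(req.get_method(), req.get_full_url())
# To actually deliver the card you would call urllib.request.urlopen(req),
# which needs a valid token and a provisioned Glass device on the account.
```

Note that nothing here runs on Glass itself: the card only appears after Google's cloud accepts the request, which is exactly the round-trip limitation described above.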
Using the GDK, which was released later, you can develop a native Android APK and run it on the Glass device. I am not following this closely, but doing so has been more or less possible since late spring 2013; back then there was no end-user distribution mechanism in place, so you had to enable USB debugging and push the APK onto the device with the debugging tool. This did not require root access.
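For illustration, sideloading an APK that way looked roughly like this (the APK name is made up; the commands are the standard Android `adb` ones):

```shell
# With USB debugging enabled on the Glass device and adb on your PATH:
adb devices                      # confirm the device is listed
adb install -r my-glass-app.apk  # install (or reinstall with -r) the APK
```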
With the GDK, you get access to the hardware sensors on the device (camera, accelerometer, and so on), so you can create more interactive experiences for your users. The downside is that such apps will probably be battery-intensive, and users may become wary of them.
But to answer your question: experience with designing for Android will definitely be helpful in a general sense. If you think about it, Android devices are in fact wearable computers. The glasses form factor takes that to a whole new level (instead of reaching into your pocket, the glasses are already in front of your eyes, ready for use), but there are still similarities.
In Android, as in Glass, you aim to give the user access to information, communication, or entertainment while taking into account the limitations of the platform, especially limited battery life.
People who have Glass are likely to be Android users, so another argument is that you can build on familiar Android experience when making a Glass app. You can also target them with your app on both Glass and Android, providing a unified experience for accessing your content. You might even want to use the phone's touch screen to control some aspects of the Glass app.