Archive for the ‘mobile landscape’ Category

Wearable Tech – The New Frontier for Learning


(This editor from The Motley Fool sums up how I see the current state of wearables.)

I started learning about mobile technology in 2006, and even then I saw its potential for learning. Now we are starting to see some great implementations of mobile technology employed as a learning tool. From advanced augmented reality applications to simple text messaging (SMS), people are learning to use their mobile phones and tablets for learning in the moment of need.

Wearable technology is an extension of mobile technology beyond the smartphone and tablet. We are only at the beginning: wearable technology will advance and find its way into our clothing and our accessories (watches, bracelets, glasses, shoes, gloves, etc.). A number of accessories are available already:

Google Glass: Glass is a very compelling learning platform. There are even some simple things you can do right now to create Google Glass content with the Mirror API, which lets you create content in HTML, video, rich media and text, so it's not a big leap for anyone with basic technical skills. I like the format of the Mirror API because its card-style layout has worked really well in mobile learning products.
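To make that concrete, here's a minimal sketch of pushing an HTML card to a wearer's timeline through the Mirror API. It assumes you've already obtained an OAuth 2.0 access token for the Glass timeline scope, and the card content itself is just an illustration:

```typescript
// Sketch: posting a simple HTML "card" to a Glass timeline via the Mirror API.
// Assumes accessToken is an OAuth 2.0 token with the glass.timeline scope;
// error handling is omitted for brevity.
async function postTimelineCard(accessToken: string): Promise<void> {
  const card = {
    html: "<article><section><p>Step 3: Check the valve seal.</p></section></article>",
    notification: { level: "DEFAULT" }, // nudge the wearer that a new card arrived
  };

  const response = await fetch("https://www.googleapis.com/mirror/v1/timeline", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(card),
  });

  console.log("Timeline item created:", await response.json());
}
```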

Smart watches (Galaxy Gear and Pebble, to name a couple): Right now, these watches seem to fill a real niche in the way they provide notifications, which sets them apart from smartphones. A smartphone can give you an audible notification that something has happened (for example, a sound when a text message arrives), but it requires you to take the phone out of your pocket to check the notification, which interrupts the flow of what you're doing. A smartwatch can provide the same notification, and you can react by simply glancing at your wrist. If you don't need to act, you don't need to interrupt your flow. A passive indicator can help retain attention and focus, which we know is key in learning. There's lots more to think about when it comes to how smartwatches can be useful to the world of technology-enhanced learning, so we're just getting started.

There will be a lot to come on this topic and how we can further leverage wearable technology for learning purposes. In the meantime, check out this podcast and the article below to gain a perspective on wearables.

http://www.wired.com/gadgetlab/2013/12/wearable-computers/


Where Does the mLearning Go?

Task-based learning with mobile devices

We talk a lot about mobile learning in a general sense. Most learning professionals agree that it’s at least another tool in our arsenal and certainly could be very valuable to learners. But two questions come to mind when I reflect on my conversations with students and learning professionals:

Where does mLearning fit in?

What does mLearning look like?

I wouldn't want to suggest that we use only one particular strategy for mLearning. Like all technology, mobile should be used only when it makes sense and helps your learners accomplish a learning task. But one place you can really start to help your learners is within the task the learner is performing, and that means you need to know what your learners are doing. I recently surveyed my core group of learners at my previous company (I just moved to a new employer). The survey covered a few things, but mobile tasks were one of the major areas: I wanted to know what the learners in my group were doing with their mobile devices, so I asked them, and I got some good answers. I presented the results at the eLearning Guild's latest online forum. While phone calls and email were the two biggest mobile activities performed by our learners, text messaging and web browsing/searching were right up there. These results may not surprise you; I figured that communication would be one of the most useful functions of a mobile device. But knowing that people are using their mobile browsers, their voice capabilities and their text messaging allows us to think about how we could embed learning into those capabilities.

I’ll take a cut at the first of those in this post, and I’ll cover the others in subsequent posts. Let’s start with voice calls:

Voice calls – how can we support learning before a phone call takes place?

My ideas: Most of my learners had iPhones or Android phones, so my first reflex is to use the browser. We know those learners can use a WiFi connection on their device while making a phone call (provided one is available). So we could build a simple interface that supports learners during their corporate phone calls by giving them access to learning resources designed to be easy to read and otherwise accessible on a mobile device. I believe simplicity of the interface and the content is key, because the learner's attention will be divided between the phone call and the resource. The content could range from immediate data that supports the substance of the call, to coaching suggestions a learner could reference when talking to a client, to a checklist of things to cover during the call. You may say that some of these are straight performance support and not "learning", but I am in favor of learning professionals owning all of it, since we are the ones who know how to structure content for learning... why shouldn't we be making the performance support content?!
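As a rough sketch of what that simple interface might look like, here's a browser script that renders a call checklist from a JSON feed. The /api/call-checklist endpoint and its data shape are hypothetical placeholders:

```typescript
// Minimal sketch of a mobile call-support page: render a checklist from JSON
// so a learner can glance at it mid-call. Endpoint and shape are hypothetical.
interface ChecklistItem {
  label: string;
  done: boolean;
}

async function renderChecklist(): Promise<void> {
  const items: ChecklistItem[] = await (await fetch("/api/call-checklist")).json();
  const list = document.getElementById("checklist") as HTMLUListElement;

  for (const item of items) {
    const li = document.createElement("li");
    const box = document.createElement("input");
    box.type = "checkbox";
    box.checked = item.done;
    li.appendChild(box);
    li.appendChild(document.createTextNode(` ${item.label}`));
    list.appendChild(li);
  }
}

renderChecklist();
```

The same page could swap the checklist for quick-reference data or coaching prompts; the point is a single glanceable screen that doesn't fight the phone call for the learner's attention.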

Another option – Provide voice coaching to the person who is in the conversation. You could help learners by embedding actual coaching, delivered by voice, into the call. This strategy has been used to teach and coach help desk and support technicians for some time now and has proven effective in the field of customer support.

Another option – Provide text-message-based question and answer services. A learner could be on a call and send simple text message questions to a system or individual, and the individual or automated system on the receiving end would respond immediately with an answer. People use this method all the time when they are on a phone call with one person and need information from another. I was recently on a call with a friend I was visiting in his city, and he asked me what time my flight landed. I didn't know, so I sent a text message to another friend, who had bought the tickets since we were traveling together. I got an answer back during the phone call and was able to pass it along. We could automate this model with any number of text-based Q&A systems (just do a search).
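Here's a hedged sketch of the automated version, assuming a Twilio-style SMS gateway that POSTs inbound messages to a webhook and accepts an XML (TwiML-style) reply; lookupAnswer() is a hypothetical stand-in for your own knowledge base:

```typescript
// Sketch of an automated text-message Q&A endpoint. Assumes an SMS gateway
// (e.g. Twilio) that posts form-encoded inbound messages to this webhook.
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false })); // gateway posts form-encoded data

function lookupAnswer(question: string): string {
  // Placeholder: swap in a real FAQ/knowledge-base search here.
  return `No answer on file yet for: "${question}"`;
}

app.post("/sms", (req, res) => {
  const question = (req.body.Body ?? "").trim();
  const answer = lookupAnswer(question); // note: escape XML in real use
  res.type("text/xml");
  res.send(`<Response><Message>${answer}</Message></Response>`);
});

app.listen(3000);
```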

These are just suggestions, so feel free to comment with your own thoughts. In the next post, I'll make some suggestions regarding email.


Enterprise Mobility and Mobile Learning


Many predictions for 2013 include the rise of enterprise mobility. I know that my company is pursuing an enterprise-level plan, and it is not alone: many companies and organizations, from small private firms to large government agencies, are beginning the move to enterprise mobility, and most long ago abandoned the notion that it's a bad thing. This is all good for us as mobile learning technologists, designers and developers.

My current focus is on building an enterprise application for my company. I like working on internal applications for a number of reasons. First, enterprise applications let you focus on the problems you see every day within your organization. You can build an application that facilitates better communication or connectivity between employees or departments, boosting overall collaboration. You can build an application that provides quick information for employees who need to do specific tasks. For example, your department may rely on up-to-the-minute metrics to make decisions, so you could build an application that shows that information at any time. And since employees are likely to have their mobile devices with them at all times, using your application on their devices would improve their situational awareness.
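As a minimal sketch of that metrics idea, here's a small browser script that polls an endpoint and refreshes the display. The /api/metrics endpoint and its fields are hypothetical placeholders for whatever your department actually tracks:

```typescript
// Sketch of an "up-to-the-minute metrics" mobile web view: poll a metrics
// endpoint and refresh the display. Endpoint and fields are hypothetical.
interface Metrics {
  openTickets: number;
  avgHandleTimeMin: number;
}

async function refreshMetrics(): Promise<void> {
  const metrics: Metrics = await (await fetch("/api/metrics")).json();
  const panel = document.getElementById("metrics")!;
  panel.textContent =
    `Open tickets: ${metrics.openTickets} | Avg handle time: ${metrics.avgHandleTimeMin} min`;
}

refreshMetrics();
setInterval(refreshMetrics, 60_000); // poll once a minute
```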

If you're an instructional person reading this, you may be struck by how much these examples sound more like performance support and productivity applications than learning applications, and you would be right. Most of my thinking about mobile learning applications comes back to information delivered at the point of need, or learning content that helps someone do a job. I used to think of learning and performance support as two different things. Now I think of learning as a huge, broad container that includes performance support along with everything else we traditionally include when we think about learning. There's space for all of it in mobile learning: I find myself learning with my mobile devices out of boredom and curiosity as well as for an immediate, performance-driven need.

However, I do hear some in the instructional design community argue that performance support is not learning. Personally, I don't think that's an important argument to have, because we should be owning all learning, not simply what we have traditionally owned. As an employee of a company that continues to implement a mobile enterprise strategy, I will keep thinking about all the ways to help out, whether through performance support applications, informational applications or other training applications, because they all help our employees do a better job, and when people do a good job, they are happy. Overall, let's own the mobile learning space, including performance support, because I think we as instructional designers and developers are better equipped for it than most!

A great new app, Coach’s Eye, knows how to use its senses


Yesterday, I posted about mobile learning and using the sensors on the device in your learning design. Tomorrow, a great app from TechSmith called Coach's Eye launches in Apple's App Store. Coach's Eye is designed to help coaches, parents and teammates evaluate an athlete's performance and provide feedback through video. Think of it as being the commentator watching the game with the magic pen that writes on the screen. I had a chance to preview the application, and I can tell you that it's easy to use and provides something I haven't seen in any other app: the ability to review and slow down video so you can give feedback in a structured way. The end product is a video that you, the coach, produce with your feedback.

Among other things, Coach’s Eye allows you to slow down video to highlight certain places for improvement. You can highlight by drawing a box, a circle or lines and the best part is that you can comment on the video to give verbal feedback. You can then send the video to the person you’re coaching so they can concentrate on areas to improve.

Once you take a look at this app, you'll immediately see how useful it can be for an athlete. I've already used it to start improving my baseball swing, and I intend to keep using it for that purpose. However, I think this app can easily be used in the broader training world. Think about a scenario where you or a coworker are charged with performing a task, for example using a specific piece of equipment like a printer, or even a piece of software. Coach's Eye would be beneficial because you could record the procedure and highlight certain things along the way while also providing verbal direction to the user.

The best thing about Coach's Eye is that the designers and developers took the approach of using the device's sensors. They realized that a mobile device has both added functionality and limitations compared to a desktop or laptop computer. Since a mobile device has a camera and can easily be positioned to capture good video in almost any environment, why not leverage that strength to let the user do something other than consume the content of others? With this application, you actually create your own learning content.

I give kudos to the developers at TechSmith for building a focused, easy-to-use application. Like the makers of a lot of good applications, they stuck to a simple, intuitive design, and they made it fun with a colorful interface.

Disclosure: I do not work for TechSmith, and I don’t have any official affiliation with their company. I was able to get on a list of testers for Coach’s Eye. I believe the app and the concept of coaching through the use of mobile devices are both heading in the right direction.

mLearning: ‘Using Your Senses’


Much of mLearning has to do with repurposing existing content or building modules similar to other eLearning modules. I see the value in that, particularly for compliance training and other types of information dissemination. Sometimes learners' needs are met simply by having content available in multiple places (i.e. on a desktop and on a mobile device). So I don't discount the value of mobile courseware; I just think the design community often forgets to think about what differentiates mobile devices from desktop computers. Besides the always-on, always-connected, always-with-you nature of mobile devices, they also have a number of sensors that we can utilize in our learning design.

Through the browser, you have access to geolocation through the device's location sensors (GPS and WiFi can be used to determine the location of the device; see http://mobile.tutsplus.com/tutorials/mobile-web-apps/html5-geolocation/). And with native applications, you can access the camera, the accelerometer, GPS location and any other sensors on the device.
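For instance, here's a minimal browser sketch using the HTML5 Geolocation API (the same API the linked tutorial covers); what you do with the coordinates, such as fetching location-relevant content, is up to your design:

```typescript
// Sketch: reading the device's location in a mobile browser with the
// HTML5 Geolocation API. The browser will prompt the user for permission.
navigator.geolocation.getCurrentPosition(
  (position) => {
    const { latitude, longitude } = position.coords;
    console.log(`Learner is at ${latitude}, ${longitude}`);
    // e.g., fetch content tagged for the learner's current site or region
  },
  (error) => console.warn("Location unavailable:", error.message),
  { enableHighAccuracy: false, timeout: 10_000 }
);
```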

So how can you start to use these in your designs? That's my question to the reader. Obviously, there are several considerations when building a mobile learning application. You may have some great ideas for sensor usage but lack the staff to build the application, or maybe you are building the application yourself (like me) but know you'll have to build a web app so multiple devices can access it outside of an app store. These are just a few of the factors that shape your design. But let's not throw away the idea of using sensors in our designs. Instead, consider how the user would benefit from sensors for contextual relevance, documentation, interactivity and engagement. From there, you can walk your design back to the realities imposed by the development and implementation resources you have to work with. My hunch is that if you consider these sensors at the beginning of your design process, you will end up with a better mobile design in the end.

So the question again: How would you leverage sensors for your mobile learning design?

Here are a few developer resources to get you started if you have to use the browser for your mobile design. Native applications can access the sensors through their native programming environments, such as iOS and Android. A minimal accelerometer sketch follows these resources:

accessing accelerometer in the browser:
http://www.mobilexweb.com/blog/safari-ios-accelerometer-websockets-html5

accessing the camera roll with ActionScript 3 (Adobe Flex and AIR)
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/media/CameraRoll.html

accessing the camera with other scripts:
http://code.google.com/p/iphone-photo-picker/

Technologies like PhoneGap and Titanium also provide access to some sensors, and they let you build native applications using web technologies.
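As a minimal illustration of the first resource above, here's a sketch that reads the accelerometer in the browser via the devicemotion event; the shake-to-advance idea in the comment is just one hypothetical use:

```typescript
// Sketch: reading the accelerometer in a mobile browser via the devicemotion
// event (the approach the linked article describes for Safari on iOS).
window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const accel = event.accelerationIncludingGravity;
  if (accel) {
    console.log(`x: ${accel.x}, y: ${accel.y}, z: ${accel.z}`);
    // e.g., detect a shake gesture to advance to the next practice question
  }
});
```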