Archive for the ‘mobile user experience’ Category
(This editorial from The Motley Fool sums up how I see the current state of wearables.)
I started learning about mobile technology in 2006, and even then I saw its potential for learning. Now we are starting to see some great implementations of mobile technology as a learning tool. From advanced augmented reality applications to simple text messaging (SMS), people are learning to use their mobile phones and tablets for learning in the moment of need.
Wearable technology is an extension of mobile technology beyond the smartphone and tablet. We are at the beginning: wearable technology will advance and find its way into our clothing and our accessories (watches, bracelets, glasses, shoes, gloves, etc.). A number of accessories are available already:
Google Glass: Glass is a very compelling learning platform. There are even some simple things you can do right now to create Glass content with the Mirror API, which lets you build content in HTML, video, rich media and text, so it's not a big leap for anyone with basic technical skills. I like the format of the Mirror API because it uses a card-style layout, and that layout has worked really well in mobile learning products.
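To make the card idea concrete, here is a minimal sketch of what a Mirror API timeline card payload can look like. The `html` field carries the card content; authentication and the actual POST to the timeline endpoint are omitted, and the card text here is invented for illustration.

```python
import json

def make_timeline_card(title, body_html):
    # Build a Mirror API timeline item; the "html" field holds the
    # card's content, and Glass handles the card-style rendering.
    # OAuth and the timeline.insert call are omitted from this sketch.
    return {
        "html": f"<article><h1>{title}</h1><p>{body_html}</p></article>",
        "notification": {"level": "DEFAULT"},
    }

card = make_timeline_card("Safety tip", "Check the valve before opening.")
print(json.dumps(card, indent=2))
```

Because the payload is just HTML inside JSON, anyone who can write a web page can author a Glass card, which is what makes the format approachable.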
Smart watches (Galaxy Gear and Pebble, to name a couple): Right now, these watches seem to fill a real niche in the way they deliver notifications, which is unique compared to smartphones. A smartphone can typically give you an audible notification that something has happened (for example, a sound when a text message arrives), but it requires you to take the phone out of your pocket to check the notification, and that interrupts the flow of what you're doing. A smartwatch can provide that same notification, and you can react by simply glancing at your wrist. If you don't need to act, you don't need to interrupt your flow. A passive indicator can help retain attention and focus, which we know is key in learning. There's a lot more to think about when it comes to how smartwatches can be useful to technology-enhanced learning, so we're just getting started.
There will be a lot to come on this topic and how we can further leverage wearable technology for learning purposes. In the meantime, check out this podcast and the article below to gain a perspective on wearables.
In the last post, I talked about some ways you could use voice in mobile learning. In this one, I'll show some ways you could use SMS, or text messaging, for mobile learning. There are lots of ways SMS could be used. I liked a simple example I saw while working for a previous company. An individual in the knowledge management group decided to simply pose questions to a group of students in a class. Each question was carefully crafted to be simple enough to answer in 140 characters, while still requiring real thought. This is sort of like running a forum through text messages, but the great thing about it is that… it's mobile. Learners can tend to those questions anytime they have a free moment, without needing to be near a PC or to log in to any kind of system. Our administrator (the knowledge management worker) collected the responses and passed them to the instructor so he could review them, and he loved the idea.
Beyond that, there are more advanced solutions that use more technology than the SMS text itself. I've used PollEverywhere in presentations, and I've seen it used in the classroom. The PollEverywhere service allows your learners to respond to polling questions through SMS or, on devices with a modern browser, a web interface. The beauty of a service like this is that you can see the results right away, and so can your learners. Check out the one-minute video on the site for an overview.
If you're more inclined to tackle the technical side of all this, you can set up an SMS gateway. It's not trivial, but it lets you configure the service to fit your exact needs.
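Most HTTP-based SMS gateways boil down to a form-encoded POST with a destination number and a message body. Here is a small sketch of building such a request; the gateway URL and parameter names are hypothetical, so check your provider's documentation for the real ones.

```python
from urllib.parse import urlencode

# Hypothetical gateway endpoint and parameter names -- real providers
# (Twilio, Nexmo, etc.) each use their own URL, auth, and field names.
GATEWAY_URL = "https://sms.example.com/send"

def build_send_request(to_number, message):
    # A single GSM-7 SMS segment holds 160 characters, so we truncate
    # rather than silently split the message into multiple segments.
    params = {"to": to_number, "body": message[:160]}
    return GATEWAY_URL, urlencode(params)

url, payload = build_send_request(
    "+15551234567",
    "Q1: What is the first step in the startup checklist?",
)
print(url, payload)
```

From here, an actual implementation would POST the payload with the provider's credentials and handle delivery receipts, which is where most of the non-trivial work lives.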
You can also set up a support number for learners. Learners could send texts to that number and receive feedback from an expert or a group of support personnel. Think about how this could support your eLearning and mLearning resources. The simplicity of text-based communication is where it really shines.
We talk a lot about mobile learning in a general sense. Most learning professionals agree that it’s at least another tool in our arsenal and certainly could be very valuable to learners. But two questions come to mind when I reflect on my conversations with students and learning professionals:
Where does mLearning fit in?
What does mLearning look like?
I wouldn't want to suggest that we only use a particular strategy for mLearning. Like all technology, mobile should only be used when it makes sense and helps your learners accomplish a learning task. But one place you can really start to help your learners is within the task the learner is performing. That means you need to know what your learners are doing. I recently surveyed my core group of learners at my previous company (I just moved to a new employer). The survey covered a few things, but mobile tasks were one of the major areas. I wanted to know what the learners in my group were doing with their mobile devices… so I asked them, and I got some good answers.

I presented the results in a session at the eLearning Guild's latest online forum. While phone calls and email were the two biggest mobile activities performed by our learners, text messaging and web browsing/searching were right up there. These results may not surprise you; I figured communication would be one of the most useful functions of a mobile device. But knowing that people are using their mobile browsers, their voice capabilities and their text messaging allows us to think about how we could embed learning into those capabilities.
I’ll take a cut at the first of those in this post, and I’ll cover the others in subsequent posts. Let’s start with voice calls:
Voice calls – how can we support learning before and during a phone call?
My ideas: Most of my learners had iPhones or Android phones. My first reflex is to use the browser. We know that those learners can use a WiFi connection on their device while making a phone call (provided that one is available). So we could look to build a simple interface to support those learners with their corporate phone calls by providing access to different learning resources that are designed to be easy to read and otherwise accessible to our mobile learners. I believe the simplicity of the interface and the content is key because the learner’s attention will be divided between their phone call and their attempt to view the resource. The content could range from immediate data to support the substance of the phone call to coaching suggestions that a learner could reference when talking to a client or even a checklist of things to cover during the call. You may say that some of these are straight performance support and not “learning”, but I am in favor of learning professionals owning all of that since we are the ones who know how to structure content for learning… why shouldn’t we be making the performance support content?!
Another option – Provide voice coaching to the person who is in the conversation. You could help learners by delivering live coaching over voice while they are on the call. This strategy has been used to teach and coach help desk and support technicians for some time now and has proven effective in the field of customer support.
Another option – Provide a text-messaging-based question and answer service. A learner could be on a call and send simple text message questions to a system or an individual, and the system or person on the receiving end would respond immediately with an answer. People use this method all the time when they are on a phone call with one person and need information from another. I was recently on a call with a friend I was planning to visit, and he asked me what time my flight landed. I didn't know, so I texted the friend who had bought our tickets, since we were traveling together. I got an answer back during the phone call and was able to pass it along. We could automate this model with any number of text-based Q&A systems (just do a search).
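The automated version of that flow can be as simple as matching an incoming text against a keyword index. This toy sketch shows the idea; a real system would sit behind an SMS gateway webhook, and the FAQ entries here are invented for illustration.

```python
# Toy automated Q&A responder: an incoming SMS text is matched
# against a small keyword index. The entries below are made up.
FAQ = {
    "warranty": "Standard warranty is 12 months from purchase.",
    "returns": "Returns are accepted within 30 days with a receipt.",
}

def answer(incoming_text):
    # Case-insensitive keyword match; the first hit wins.
    text = incoming_text.lower()
    for keyword, reply in FAQ.items():
        if keyword in text:
            return reply
    # Unmatched questions get escalated to a human, keeping the
    # learner's during-call experience responsive either way.
    return "No match found -- forwarding your question to a support rep."

print(answer("What does the warranty cover?"))
```

The fallback to a human is the important design choice: automation handles the easy lookups, and the support number model from above catches everything else.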
These are just suggestions, so feel free to comment on your thoughts and suggestions. In the next post, I’ll make some suggestions regarding email.
Yesterday, I posted about mobile learning and using the device's sensors in your learning design. Tomorrow, a great app from TechSmith called Coach's Eye launches in Apple's App Store. Coach's Eye is designed to help coaches, parents and teammates evaluate an athlete's performance and provide feedback through video. Think of it as being the commentator watching the game with the magic pen that writes on the screen. I had a chance to preview the application, and I can tell you it's easy to use and provides something I haven't seen in any other app: the ability to review and slow down video so you can provide feedback in a structured way. The end product is a video that you, the coach, produce with your feedback.
Among other things, Coach’s Eye allows you to slow down video to highlight certain places for improvement. You can highlight by drawing a box, a circle or lines and the best part is that you can comment on the video to give verbal feedback. You can then send the video to the person you’re coaching so they can concentrate on areas to improve.
Once you take a look at this app, you’ll immediately see how useful it can be for an athlete. I personally used it already to start working on some improvements to my baseball swing. I intend to keep using the application for that purpose. However, I think this app can easily be used in the broader training world. Think about a scenario where you or a coworker are charged with performing a task. A simple example would be the use of a specific piece of equipment like a printer or even a piece of software. Coach’s Eye would be beneficial because you could record a procedure and highlight certain things along the way while also providing verbal direction to the user.
The best thing about Coach's Eye is that the designers and developers embraced the device's sensors. They realized that a mobile device has both added capabilities and limitations compared to a desktop or laptop computer. Since a mobile device has a camera and can easily be positioned to capture good video in almost any environment, why not leverage that strength to let the user do something more than consume the content of others… you actually create your own learning content with their application!
I give kudos to the developers at TechSmith for building a focused, easy-to-use application. Like a lot of good applications, they stuck to a simple, intuitive design and made it fun with a colorful interface.
Disclosure: I do not work for TechSmith, and I don’t have any official affiliation with their company. I was able to get on a list of testers for Coach’s Eye. I believe the app and the concept of coaching through the use of mobile devices are both heading in the right direction.
Conversations about mobile learning are happening all over. One community asking questions about mLearning is instructional designers, who are wrestling with how to approach it. As ISDs, our tendency is to provide the learner with as much information as we can, as long as we find it relevant to the learning need. However, the prevailing wisdom about mLearning suggests that we provide less content, not more. But is that really the right way to approach it? Do we truly have to provide less content, or should the structure of, and access to, the information drive our design decisions instead?
We make a lot of assumptions about mobile learners and their behaviors (i.e. they are traveling on a bus/train, they don't have any time, and they're not looking for a vast body of information, just an answer to a simple question), but are those assumptions right? And even if they are, do we know that users will always be in those situations, unable or unwilling to access more content and deepen their knowledge of the subject?
I don’t know all the answers to those questions, but I am of the mind that we can provide deeper knowledge to meet the needs of our “typical” mobile learner, AND support their possible desire to learn more about a topic.
I do think we should focus most on addressing the learner’s perceived immediate need. But I also think that we can provide more knowledge to deepen the experience if we think critically about the navigation and media we provide.
One example I can think of is a simple mobile learning application about driving a car. You could structure your navigation so that the basic, most immediately necessary content (steering, speed and how to use the turn signals) serves as the storefront of the application. You could also provide a set of short videos demonstrating how to do each of those activities. Beyond that, each video page could offer additional links and navigational components that give the learner an opportunity to see the inner workings of a steering mechanism, or a link demonstrating how speed ratios affect braking.
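One way to model that "surface first, depth on demand" structure is to attach each topic's optional deeper resources directly to the topic itself, so the landing screen stays minimal. This is only a sketch; the topic names and file paths are invented for the driving example.

```python
# Each core topic carries its own list of optional deeper resources.
# Topic names and paths are invented for illustration.
topics = {
    "steering": {
        "video": "videos/steering-basics.mp4",
        "deeper": ["articles/steering-mechanism.html"],
    },
    "speed": {
        "video": "videos/speed-control.mp4",
        "deeper": ["articles/speed-vs-braking-distance.html"],
    },
    "turn_signals": {
        "video": "videos/turn-signals.mp4",
        "deeper": [],
    },
}

def storefront(topics):
    # The landing screen lists only the core videos; the deeper links
    # stay one tap away on each topic's own page, so depth is there
    # without cluttering the immediate-need view.
    return [(name, t["video"]) for name, t in topics.items()]

print(storefront(topics))
```

The point of the structure is that adding depth never changes the storefront: new `deeper` entries enrich a topic's page without adding choices to the first screen.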
My example is very basic, and we know that a lot of complex content will have to be covered in a mobile format. But I don't think we should hold back content that can provide depth; we simply need to think about how to let the user get to it without bogging them down with so many distracting choices that they inhibit the effectiveness of the learning product.
Any ideas about how you could structure your content for easy access to the most necessary information, while maintaining the learner’s ability to dive deeper?