BYOD, MDM, Consumerization (Mobile Strategies) Tips of 2013 – Part 1

20 12 2013

Consumerization, BYOD, managing mobile devices and the cloud have been hot topics for IT professionals this year, and there has been a lot to keep up on.

Mobile devices and operating systems kept the bring your own device (BYOD) trend going strong and raised essential questions about how best to handle mobile strategies, mobile device management (MDM), workers' cloud use and more. Read up on 2013's most popular tips:

Tip 1: iOS 7 features for IT

Apple’s iOS 7 came out this year and brought with it some new tools for managing mobile devices and applications. Managed open in, per app VPN (virtual private network) access, an improved Volume Purchase Program and an updated MDM protocol can all help IT administrators retain control over iOS devices.

Many exciting consumer-focused features were shown at Apple's Worldwide Developers Conference, but some of the real benefits of iOS 7 come from Apple's efforts to be the mobile enterprise manufacturer of choice. From mobile device management options to per app VPN access, iOS 7 offers quite a few enterprise features that IT should get to know.

MDM in Apple iOS 7

Open in management. This Apple iOS 7 feature gives IT the ability to control which apps workers can use to open and share documents and attachments. For example, if a user receives a Microsoft Word attachment in his email and wants to open it in a third-party app, only the apps IT has approved — such as Quickoffice — can open that document.

Easier MDM enrollment. IT can set up corporate-owned iOS 7 devices with all their MDM requirements right away, rather than shipping devices to employees and having them complete the setup themselves. The less time and money it takes to set up devices, the more devices IT can procure for the workforce.

Enterprise SSO. Single sign-on (SSO) allows workers to log in only once and gain access to all their apps, including corporate-developed and App Store apps. As the years go by and companies switch to being mobile-only, users will appreciate not having to enter their username and password everywhere.

Other Apple iOS 7 features

Per app VPN: This feature configures apps to connect to the virtual private network (VPN) upon launch. It’s another tool that should make IT happy, but there are questions around how per app VPN works in conjunction with an MDM system.

Many company-developed and third-party apps need to have code within them to be managed by MDM. The per app VPN feature may allow IT to manage iOS 7 devices without any special code, or perhaps every time users open an app, they’ll be prompted to sign into the VPN. Plus, there may be differences in security measures; some MDM tools sandbox and encrypt data in different ways, and most VPN applications just protect a URL or path for data to travel over securely.

Third-party app data protection: Data protection uses workers’ passcodes to build a strong encryption key so data is secured without any additional configuration by IT. Apple says that all third-party applications have data protection enabled by default, which means that data stored in applications downloaded from the App Store is protected by users’ passcodes until they unlock their iOS 7 devices. This Apple iOS 7 feature could ease the security fears of some IT departments, but questions remain about the level of encryption of data stored in the cloud.

Improvements to Mail: The native Mail experience keeps getting better. With the Mail app in iOS 7, users can view PDFs, sync notes with Outlook and organize smart mailboxes — that is, groups of messages that meet certain criteria, such as those from a specific email address. But if the note syncing happens through iCloud, that could be a point of concern for IT.

More tips follow in Part 2.





F# – Programming Language

13 12 2013

F# (pronounced F Sharp) is an open-source, strongly typed, multi-paradigm programming language encompassing functional, imperative and object-oriented programming techniques. F# is most often used as a cross-platform CLI language, but can also be used to generate JavaScript and GPU code.

F# is developed by the F# Software Foundation, Microsoft and open contributors. An open source, cross-platform edition of F# is available from the F# Software Foundation. F# is also a fully supported language in Visual Studio. Other tools supporting F# development include Mono, MonoDevelop, SharpDevelop and the WebSharper tools for JavaScript and HTML5 web programming.

F# originated as a variant of ML and has been influenced by OCaml, C#, Python, Haskell, Scala and Erlang.

More about F# at: http://www.tryfsharp.org/





Android Testing – Part III

22 05 2013

Also be sure to check an app's effect on battery life. Many users expect a phone's battery to last an entire day, at minimum. And with phones performing more and more tasks, battery life is stretched ever thinner. If an app sucks more than its fair share of power, it will be ditched by users. To be sure an app isn't a power hog, I would recommend both a normal use test and an idle use test.

Normal use test: Start on a full battery and use the application for 6-12 hours and measure the battery level at the end of each ½ or 1 hour. You may use an automated testing tool to do this so as to keep the test running for the required time interval. This test will tell you how quickly your application drains the battery when in ‘normal’ use, with all the foreground and background features of the application running normally.

Idle run test: Turn off the screen lock and power saver modes. Then start on a full battery and keep the application running on its main, home or dashboard screen as appropriate, and measure the battery level at ½ or 1 hour intervals. This test will measure the battery drain due to such things as intentional or unintentional automatic screen refreshes, and due to the background threads or services running in your application.
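
If you want a lightweight way to take those readings, something like the sketch below can help. It is a minimal example of my own, not a full harness: the class name and log tag are placeholders, and it simply reads the sticky ACTION_BATTERY_CHANGED broadcast each time your test schedule fires.

    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.os.BatteryManager;
    import android.util.Log;

    // Minimal battery sampler: reads the current charge level from the sticky
    // ACTION_BATTERY_CHANGED broadcast and logs it as a percentage. Call it every
    // half hour or hour from your test harness to chart the drain curve.
    public final class BatterySampler {

        public static void logBatteryLevel(Context context) {
            // Passing a null receiver returns the last sticky battery intent
            // without actually registering anything.
            Intent battery = context.registerReceiver(null,
                    new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            if (battery == null) {
                return;
            }
            int level = battery.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
            int scale = battery.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
            if (level >= 0 && scale > 0) {
                Log.i("BatterySampler", "Battery at " + (100 * level / scale) + "%");
            }
        }
    }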

5. Common Issues

In addition to the regular testing considerations, a handful of issues pop up across the Android world more regularly than we’d like. Be sure to test for these common bugs across multiple devices pre-launch.

Special Characters: If the app includes a search field or data entry form, test how it handles special characters. Depending on the programming language, special characters can cause the field to choke. This is especially important if the app will be used internationally or with a native language that includes special characters (such as Spanish); see the test sketch after these items.

Long Strings: Long strings of characters are more of a fringe use case. Nonetheless, it is important to make sure an app can handle at least moderate length strings of characters. Let the type of data field – and the assumed typical entry – dictate an acceptable length.

Tap and Hold: Even if an app isn’t designed to support the long-hold copy and paste function or the tap and hold move function, make sure it isn’t confused by those actions either. Even if users don’t perform these functions intentionally, it’s very possible that they’ll get distracted and accidentally hold the screen for too long. You don’t want the app to freeze or crash as a result of this real-world scenario.

Virtual Keyboard: The majority of Android devices are touchscreens with virtual keyboards. If a user accidentally – or intentionally – raises the keyboard on their device, the app screen can distort or, worse, the app itself can become unusable.
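
To illustrate the special character and long string checks above, here is a minimal instrumentation test sketch of my own. MainActivity and R.id.search_field are placeholders for your own activity and input field, and the field is assumed to have no length filter.

    import android.test.ActivityInstrumentationTestCase2;
    import android.widget.EditText;

    // Sketch of an instrumentation test that pushes awkward input into a text field.
    // MainActivity and R.id.search_field are placeholders for your own app.
    public class SearchFieldInputTest extends ActivityInstrumentationTestCase2<MainActivity> {

        public SearchFieldInputTest() {
            super(MainActivity.class);
        }

        public void testSpecialCharactersAndLongStrings() throws Throwable {
            final EditText field = (EditText) getActivity().findViewById(R.id.search_field);

            // Special characters that commonly trip up input handling.
            final String special = "ñÑáÉü <>&\"'%_\\";
            runTestOnUiThread(new Runnable() {
                @Override
                public void run() {
                    field.setText(special);
                }
            });
            getInstrumentation().waitForIdleSync();
            assertEquals(special, field.getText().toString());

            // A moderately long entry; let the field's purpose dictate the length.
            final StringBuilder longText = new StringBuilder();
            for (int i = 0; i < 50; i++) {
                longText.append("abcdefghij");
            }
            runTestOnUiThread(new Runnable() {
                @Override
                public void run() {
                    field.setText(longText.toString());
                }
            });
            getInstrumentation().waitForIdleSync();
            assertEquals(longText.toString(), field.getText().toString());
        }
    }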

These are a handful of common bugs Android users encounter every day, so making sure your app isn't tripped up by them before launch can give you a leg up on the competition. When testing, remember to think like an end user and test how the app responds to potential real-world situations. Just because an app isn't designed to support a function doesn't mean a user won't try it – and be angry if the app crashes.

6. Android Security

It's no secret that Android devices are susceptible to malware – largely because of the open nature of Google Play and the presence of unmonitored, third-party app markets. Couple the steady stream of malware reports with users' increasing interest in privacy, and an accidental security slip-up can be disastrous to your app's success. At the very least, be sure an app addresses these six key security factors:

  • Confidentiality: Does your app keep your private data private?
  • Integrity: Can the data from your app be trusted and verified?
  • Authentication: Does your app verify you are who you say you are?
  • Authorization: Does your application properly limit user privileges?
  • Availability: Can an attacker take the app offline?
  • Non-Repudiation: Does your app keep records of events?

It is also helpful to have white hat security experts attempt to exploit at least the most common security vulnerabilities, such as accessing data through unsafe storage or transmission practices, cracking inadequate encryption and recovering hardcoded passwords. If an app is easily hacked, it probably will be hacked.

Finally, test an app's access to device APIs (such as contacts, photos, camera or GPS). According to the tenets of the Open Handset Alliance (of which Android is a part), “An application can call upon any of the phone’s core functionality such as making calls, sending text messages, or using the camera, allowing developers to create richer and more cohesive experiences for users.” However, when stories about Path “secretly” accessing users’ contacts drew attention to this common practice, users became more concerned about apps accessing unnecessary personal data. To avoid incurring user ire, test that an app clearly prompts users to grant access permission during download or launch. With so much competition in the app market, it is easy for users to find a replacement that they feel does not unnecessarily invade their privacy.
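
One simple way to audit this during testing is to dump every permission an installed build actually requests and compare the list against what the app genuinely needs. The sketch below uses the standard PackageManager API; the class name, log tag and package name are placeholders of my own.

    import android.content.Context;
    import android.content.pm.PackageInfo;
    import android.content.pm.PackageManager;
    import android.util.Log;

    // Logs every permission declared by the given package so testers can spot
    // anything the app does not obviously need (contacts, location, etc.).
    public final class PermissionAudit {

        public static void logRequestedPermissions(Context context, String packageName) {
            try {
                PackageInfo info = context.getPackageManager()
                        .getPackageInfo(packageName, PackageManager.GET_PERMISSIONS);
                if (info.requestedPermissions != null) {
                    for (String permission : info.requestedPermissions) {
                        Log.i("PermissionAudit", permission);
                    }
                }
            } catch (PackageManager.NameNotFoundException e) {
                Log.w("PermissionAudit", "Package not installed: " + packageName, e);
            }
        }
    }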

7. Useful Insights

Testing is an art, so everyone does it a little differently. Still, the best testers aren't afraid to learn new things and pick up new pointers. Here are a few tips to help you test and to keep you in the frame of mind of the end user.

When capturing videos of an app under test, it’s useful to hold each action and leave each screen displayed a little longer than you would in normal use. This makes the video easier for test managers and developers to follow and possibly pinpoint which action went wrong.

Digging through Google Play reviews reveals which issues users hate the most. Here’s what a recent look tells us:

  • 40% complain about installation
  • 16% complain about performance
  • 11% write about app crashes
  •  3.5% report hangs or freezes
  •  2% complain about the UI
  •  1% had security or privacy issues

Bad reviews can kill an app before it even gets off the ground. Thorough testing of these six known problem areas can help you avoid poor reviews.

Emulators are helpful for early stage functional testing, but it is extremely important to test apps on real devices. A keyboard and mouse cannot adequately simulate touch screen usability. And some features, such as accelerometer response or location mapping, simply cannot be tested on a stationary emulator.

Though testing on Android presents a bigger challenge than on most other operating systems, the platform is not going away or simplifying any time soon. By knowing the challenges of Android testing, you can adequately address known issues, launch better apps and keep all your users – no matter what device or platform version they're on – happy and satisfied.





Android Testing – Part II

7 04 2013

2. Screen Size and Density

In comparison to the extremely controlled nature of iOS, the combination of screen sizes and screen densities in the Android universe adds an extra challenge to app testing. Helpfully, despite there being more than 200 distinct devices, Android classifies all official devices into one of four screen size classes and one of four screen density classes.

  • Screen sizes: Small, Normal, Large, Extra Large
  • Screen densities: Low dpi, Medium dpi, High dpi, Extra High dpi

Since testing on all devices is nearly impossible, these classifications will give testers and developers a good idea of how the app will appear on devices within the same screen size/density category. A recent breakdown of active devices looks like this:

 

                 Low dpi    Medium dpi    High dpi    Extra High dpi
Small Screen     2.3%       –             2.5%        –
Normal Screen    0.7%       26.2%         58%         0.9%
Large Screen     0.3%       3%            –           –
XL Screen        –          7.4%          –           –

As you can see, the most popular devices are squarely in the normal screen size range and generally sport either high screen density or medium density. This data is updated fairly frequently on the Android Developers website, so check it often as more devices hit the market.
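
When a bug only shows up on certain hardware, it also helps to record which cell of that matrix a device falls into. Here is one small sketch of my own using the standard DisplayMetrics and Configuration APIs; the class name and log tag are placeholders.

    import android.app.Activity;
    import android.content.res.Configuration;
    import android.util.DisplayMetrics;
    import android.util.Log;

    // Logs the density bucket and screen-size class of the current device so a
    // tester can note which row and column of the size/density matrix a report maps to.
    public final class ScreenClassLogger {

        public static void log(Activity activity) {
            DisplayMetrics metrics = new DisplayMetrics();
            activity.getWindowManager().getDefaultDisplay().getMetrics(metrics);

            int sizeClass = activity.getResources().getConfiguration().screenLayout
                    & Configuration.SCREENLAYOUT_SIZE_MASK;

            Log.i("ScreenClass", "densityDpi=" + metrics.densityDpi
                    + " (120=ldpi, 160=mdpi, 240=hdpi, 320=xhdpi)"
                    + ", sizeClass=" + sizeClass
                    + " (1=small, 2=normal, 3=large, 4=xlarge)");
        }
    }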

3. Platform Versions

Android supports and tracks ten platforms/versions ranging from “Cupcake” 1.5 to “Ice Cream Sandwich” 4.0.4 (I am slightly outdated). Not all devices support all platform versions and new versions are not released to all handset makers at the same time. Instead, new releases are dripped out in increments, often leaving users eagerly waiting.

Despite being the most recent version, Ice Cream Sandwich is only the third most used platform and still has not been rolled out to all devices. “Gingerbread” 2.3.3 continues to dominate Android devices with 64% of use and “Froyo” 2.2 comes in second at 19%. Ice Cream Sandwich comes in third by only 1% (over “Éclair” 2.1).

By testing an app exclusively on the latest version, you will exclude an extremely high number of users. Conversely, if you do not update an app to work consistently on newer releases, you may lose current users as they upgrade. With so many platform versions present on Android devices, it's important to test apps on at least the top three versions – if not the top four or five. To pinpoint the most popular versions, visit the Android Developers Platform Versions page, which tracks devices active on Android's Google Play market.
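
On the development side, the usual companion to this testing is a runtime version check so a single APK behaves sensibly on Froyo, Gingerbread and Ice Cream Sandwich alike. A minimal sketch using the standard Build constants (the class name is a placeholder of mine):

    import android.os.Build;

    // Gate newer APIs behind a runtime version check instead of dropping older devices.
    public final class VersionGate {

        public static boolean supportsIceCreamSandwichFeatures() {
            // Build.VERSION_CODES.ICE_CREAM_SANDWICH is API level 14 (Android 4.0).
            return Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH;
        }
    }

    // Usage: choose the code path per device rather than per build.
    // if (VersionGate.supportsIceCreamSandwichFeatures()) { /* 4.0+ path */ }
    // else { /* fall back to behaviour that works on 2.2 / 2.3 handsets */ }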

WHAT TO TEST

As with all mobile app testing, common functions should be tested on as many devices as possible to ensure consistency. However, Android’s device and platform combinations present new challenges – a feature that functions perfectly on one device may cause a bug on another. In addition to normal testing considerations, there are a number of recurring issues that commonly crop up on a variety of Android devices. These issues are prevalent enough that they should be added to test cases on as many devices as possible.

Common Considerations

These functions may seem like no-brainers when it comes to testing, but skipping one can spell doom for an app.

a. Registration & Login: Testing should make sure the registration and login processes are intuitive, easy to complete and functional from start to finish. It is important to have testers on a variety of devices complete the entire registration and log-in process to ensure everything runs smoothly and no error messages occur.

b. Menu Options: Oftentimes, menu options can be difficult to access and decipher. Make sure that menu items like Help, About, etc. are easy to find and navigate. This is especially important to test on a variety of screen sizes and with real users, since fingers are much larger than a mouse pointer on an emulator.

c. Keys: Any problems related to scrolling, text selection, the back button, etc. are bound to lead to trouble, so make sure your key functionality is clear and consistent. Also, be sure to check that the app will function consistently using both a physical keyboard and touchscreen.

d. Interruptions: How does the app behave when the device battery is at full strength, medium strength and low strength? What about if the user gets an incoming call or text? If there is another app running in the background? These are all real-life scenarios that users are going to encounter. Don't let them take down your app. (Crash logs are particularly helpful in diagnosing whether other events are adversely affecting an app.)

e. Error Messages: Your error messages should be clear, concise and actionable. “Error 12a26q” may make sense to the developers, but it doesn’t help users know what went wrong or how to fix it. Make your error messages easily understandable and you’re a step ahead of virtually all mobile applications on the market today.

f. Landscape v. Portrait: An app's functionality and usability should not be affected when changing from portrait to landscape mode. Test to be sure buttons, fields and menu options are easily accessible and functional in both orientations (see the sketch after this list).

g. Settings: Change a device’s settings and repeat necessary tests to ensure an end-user’s custom settings won’t affect the app’s performance.
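
For point (f) in particular, a small instrumentation sketch can flip the device between orientations and give your usual UI assertions a place to run. MainActivity is a placeholder for the activity under test.

    import android.content.pm.ActivityInfo;
    import android.test.ActivityInstrumentationTestCase2;

    // Rotation smoke test: if the activity crashes while being recreated for the
    // new orientation, the instrumentation run fails. MainActivity is a placeholder.
    public class OrientationTest extends ActivityInstrumentationTestCase2<MainActivity> {

        public OrientationTest() {
            super(MainActivity.class);
        }

        public void testSurvivesRotation() {
            final MainActivity activity = getActivity();

            activity.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
            getInstrumentation().waitForIdleSync();

            activity.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
            getInstrumentation().waitForIdleSync();

            // Add assertions for your own buttons, fields and menu items here so the
            // test fails if anything becomes unreachable after the change.
        }
    }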

Also be sure to check an app's effect on battery life. Many users expect a phone's battery to last an entire day, at minimum. And with phones performing more and more tasks, battery life is stretched ever thinner. If an app sucks more than its fair share of power, it will be ditched by users.





Android Testing – Part 1

9 03 2013

This is the first part of a three-part series of posts.

Android Testing

Android app testing is complicated by the fact that the platform has, without debate, the most complex array of handsets, versions and carriers of any mobile platform available. And unlike more closed systems, each Android device presents its own set of challenges. But being challenging is no excuse for limited or poor testing.

Despite encompassing a large number of devices, there are a few things that can and should be tested across the field.

It's especially important to test Android applications on as many devices as possible, because something that works perfectly on one device might cause a bug on another. End users will use a variety of phones, so apps need to work consistently across all of them.

To help achieve that cross-device consistency, I will be detailing a few focus areas that are essential to Android success.

This post and the next couple will highlight tips for testing within the Android matrix, specific issues to test for and, finally, how to get testing done.

1. Handsets and Network Carriers

The most obvious part of the Android matrix is the sheer number of devices sporting the operating system. Complicating this ever growing figure is the number of handset manufacturers and carriers that participate in the Android universe.
 
According to the official Android website, Android devices are available in 25 countries. Worldwide, 23 manufacturers produce Android phones and 63 carriers support them on their networks. Globally, there are roughly 250 officially recognized Android handsets currently on the market (not taking into account platform versions or custom skins). Narrowing the scope to the United States, there are still around 100 different devices produced by 15 manufacturers and supported by seven carriers. Nearly 20 of these devices include a physical keyboard, while the rest are touchscreen only. A special mention goes to the Samsung Galaxy Note, which not only has a unique screen size but also a stylus (unlike any other Android device). Because each device has its own specs encompassing physical design and custom UI attributes, testing coverage should include as many devices as possible.

Note that Android apps can be accessed on even more devices, including the cheaper “low end” phones that are beginning to appear in many emerging markets. These phones can only support non-data-intensive apps and require a completely separate round of testing. Many testing initiatives have not yet begun addressing this new category of phone, but it would be prudent to keep an eye on the trend.

 

 





Google Summer of Code

3 03 2013

What is Google Summer of Code?

Google Summer of Code (GSoC) is a program that matches mentoring organizations with college and university student developers who are paid to write open source code. Each year, Google works with many open source, free software and technology-related groups to identify and fund proposals for student open source projects.

GSoC pairs accepted student applicants with mentors from participating projects. Accepted students gain exposure to real-world software development and an opportunity for employment in areas related to their academic pursuits. In turn, participating organizations are able to identify and bring in new developers more easily. Best of all, more source code is created and released for the use and benefit of all; all code produced as part of the program is released under an open source license.

This program has brought together thousands of students and mentors from over 100 countries worldwide. At the time of writing, over 200 open source projects, from areas as diverse as operating systems and community services, have participated as mentoring organizations for the program. Successful students have widely reported that their participation in GSoC made them more attractive to potential employers and that the program has helped greatly when embarking on their technical careers.

Goals of the Program

The program has several goals:

  • Get more open source code written and released for the benefit of all.
  • Inspire young developers to begin participating in open source development.
  • Help open source projects identify and bring in new developers.
  • Provide students the opportunity to do work related to their academic pursuits during the summer: “flip bits, not burgers.”
  • Give students more exposure to real-world software development (for example, distributed development and version control, software licensing issues, and mailing list etiquette).

A Brief History of Google Summer of Code

Google Summer of Code began in 2005 as a complex experiment with a simple goal: helping students find work related to their academic pursuits during their school holidays. In GSoC’s first year, 40 projects and 400 students participated. In 2010, the sixth Google Summer of Code wrapped up to the best results yet – more than 89% of the 1,026 student participants in the program successfully completed their projects. Best of all, most of the organizations participating over the past six years reported that the program helped them find new community members and active committers.

See the appendix for a more extensive history of the program.

Participant Roles

There are four roles in the Google Summer of Code program:

Program Administrator: Program administrators are employees of Google’s Open Source Programs Office who run the program. These folks do a variety of tasks: select the participating open source projects each year, create and analyze the program evaluations, administer the program mailing lists, ensure that participants are paid, and send out the all-important program t-shirt. Program administrators do not select which student proposals are accepted into Google Summer of Code.

More broadly, program administrators provide useful advice to both new and seasoned participants in a variety of areas, relying on their experience with the program and mentoring process. Not sure how to handle a disappearing student? Don’t know which mailing list has the latest information on payments? Wondering how to best improve your organization’s application for the program? Find a program administrator and ask away!

Organization Administrator: Org admins are the “cat herders” for GSoC open source projects. These people submit the organization’s application to participate in the program to Google, ensure that mentors fill out evaluations in a timely fashion, and generally organize their project’s participation in GSoC. The org admin acts as Google’s go-to person if any issues arise. There are also some trivial administrative tasks in GSoC’s online system that can only be completed by organization administrators, all of which are enumerated in the system documentation. Some org admins also mentor students during GSoC, and that’s perfectly fine; it is just highly recommended that folks make sure they have enough time to execute both roles simultaneously.

Org admins are the final authority about matters such as which student projects will be accepted and who will mentor whom. On the social side, if a mentor and student have difficulties communicating or making progress, an org admin will often step in as a neutral party to help the two work together more effectively. Org admins also help track down disappearing participants, whether mentors or students.

Mentor: Mentors are people from the community who volunteer to work with a student. Mentors provide guidance such as pointers to useful documentation, code reviews, etc. In addition to providing students with feedback and pointers, a mentor acts as an ambassador to help student contributors integrate into their project’s community. Some organizations choose to assign more than one mentor to each of their students. Many members of the community provide guidance to their project’s GSoC students without mentoring in an “official” capacity, much as they would answer anyone’s questions on the project’s mailing list or IRC channel.

Student: A student participant in GSoC is typically a college or university student; the only academic requirement is that accepted applicants be enrolled in an accredited academic institution. Students must also be at least 18 years of age in order to participate. Students come from a variety of academic backgrounds, and though most students are enrolled in a Computer Science program there is no requirement that they be studying CS; past student participants in GSoC have come from disciplines as varied as Ecology, Medicine, and Music.

Students submit project proposals to the various organizations participating in GSoC. The organizations select which student proposals they would like to see funded by Google. Many student participants have gone on to become important members of the open source community. Many students have also gone on to become mentors and even org admins for the program.

Program Structure

All of the program rules are enumerated in the GSoC FAQs each year. Provided all of the rules regarding eligibility for the program are followed, Google takes a fairly hands-off approach to GSoC. Each organization structures its participation in GSoC in whichever way makes the most sense for its technical and community needs.

Organization Applications: The GSoC program is announced each year on the Official Google Blog (http://googleblog.blogspot.com) among other places, and this announcement provides application deadlines for projects. Each organization must apply to participate. The questions asked in the organization application are published in advance and linked from the Program FAQ. Organizations usually have one week to apply for the program. Following receipt of applications, Google’s program administrators select which organizations will participate in that year’s Google Summer of Code.

Student Applications: Students are encouraged to begin talking to the participating organizations as soon as the list of accepted organizations is published. Prior to the opening of applications, it is important to take some time to talk to potential student applicants. This helps them refine their ideas so that they will produce a better quality proposal. Each organization is asked to provide a proposal template, but the best student applications go far beyond the template and an organization’s ideas list. Students are given at least a week to complete their applications.

Following the student application deadline, organizations begin reviewing the proposals they received. During the review phase, organizations maintain an open dialogue with their student applicants, asking them to refine their proposals. They may also conduct further interviews to determine which students are most likely to be a good fit for the community and work required. Over the course of several weeks, each organization prioritizes its list of proposals. Google lets each organization know how many student proposals it will fund, and organizations select their top proposals.

Sometimes a student has proposals accepted by more than one organization. Google leaves it to the organizations and the student to decide which organization the student will work with during the course of the program. While the organizations are not required to involve the student in the decision process, it is good practice to take the student’s preferences into account.

Community Bonding Period: Before students are expected to start working, there is a six-to-eight-week period built into the program to allow them to get up to speed without the pressure to execute on their proposals. During this time, students are expected to get to know their project communities and participate in project discussion. During this time, students should also set up their development environments, learn how their project’s source control works, refine their project plans, read any necessary documentation, and otherwise prepare to complete their project proposals. Mentors should spend this time helping their students understand which resources will be most useful to them, introducing them to the members of the community who will be most helpful with their projects, and generally acculturating them.

Start of Coding: Start of coding is the date the program officially begins; students are expected to start executing on their project proposals. At start of coding Google provides an initial payment to the student, around 10% of the overall stipend. At this point, students should begin regular check-ins and regular patch submissions.

Midterm Evaluations: Approximately halfway through the program, Google requires that mentors submit evaluations of their students’ progress. If the project is not proceeding effectively, it is discontinued and the student is dropped from the program. Students who receive a successful evaluation from their mentors continue working on their projects and receive a second program payment, approximately 45% of the overall stipend. Google also asks students to submit an evaluation reviewing their project to date, their mentor’s and organization’s performance, and any obstacles to their progress. Google may also ask org admins and mentors without students assigned to submit a general evaluation of the program during this phase.

Because software development is an iterative process, the original project plan must often be reworked and new milestones set. Directly following midterm evaluations is the perfect time for mentors and students to review progress to date and to reset goals for the project as needed.

Pencils Down: At the final deadline for coding, students are welcome and encouraged to continue work on their projects, but only work done before the “pencils down” date can be evaluated. Google suggests that all work be complete about a week earlier to give the student time for last-minute improvements and corrections, as well as preparing their work for delivery.

Final Evaluations: Final evaluations should be based only on work the student has completed during the program. If the project goals have not been met to the mentor’s satisfaction, the student is dropped from the program and receives nothing more from Google. As with midterm evaluations, students are asked to submit an evaluation of their overall success. Google will ask all participants from each organization to submit an evaluation of the overall success of GSoC.

Post Final Evaluations: Students who successfully complete their final evaluations are asked to submit a code sample to Google. These students then receive the final program stipend payment, a certificate of completion and a truly spiffy t-shirt. All program mentors and org admins also receive a t-shirt.

It’s a goal of Google Summer of Code that the student participants stick around long after the program has ended and continue contributing to their project communities. Great mentors continue working with their students to encourage them to do so. It’s also customary during this time for organizations to publish a post-GSoC wrap up report. Mentors and students take a well-deserved break, but energetic organizations begin planning for the next GSoC during this time.

For more details, please visit: https://developers.google.com/open-source/soc/

 





Mobile web content adaptation techniques

5 01 2012

Found an interesting article this morning.

Mobile web content adaptation techniques.







