Sharing UX work can be tricky when you’ve been working in federal government or under corporate non-disclosure agreements. I’m going to discuss a project in general terms, and hopefully what I share here gives you enough meat about my experience and contributions during my two years at Enterprise Holdings. I worked for several months on a project for managing vehicle inventory. These activities were being performed adequately on existing hardware, but an inevitable hardware replacement was looming and a software update was non-negotiable.
| Project Name | Enterprise Vehicle Project |
| --- | --- |
| Tagline | Create a replacement suite of apps for a set of handheld terminal programs that track vehicle inventory |
| Project Summary | A company-wide legacy device used to track vehicle locations and other metadata needed to be replaced with an Android-based handheld scanner. Application replacements and new applications were to be made to serve lot employees. |
| Date or Timeframe | March 2019 – November 2019 |
| Tasks & Responsibilities | Develop, with the vehicle team, new Android applications to replace the existing suite of inventory apps. Evaluate existing applications, establish updated workflows, retain speed, and simplify without creating additional screen complexity. |
| Design Tools / UX Methods | Sketch, Affinity Designer, Visio, Inkscape |
| KPIs / Analytics | Location saved, transaction speed, page load, transactions per day |
| Team / Collaborators | Mike Smick, John S, Patty M, Gene M. |
There were a few applications that we were tasked with porting over from a previous handheld device. The older handheld was a long-used text-based inventory scanner. On first assumption one might think, “how archaic,” imagining a green LCD with just a few lines of abbreviated text. However, one major advantage these text-based systems have is speed. Battery life and reliability are pluses too. What could refresh faster for a user than a terminal display? What could be quicker to read than a common language of shorthand abbreviations? And compared to a screen refresh of a few bytes, even the smallest asynchronous HTTP request means new services now sit in the path and become part of the user’s wait time. It’s also a bit intimidating when you know somebody will be using both devices right next to each other and comparing their responsiveness.
When we met about this objective, we knew that the default behaviors, the “happy path,” were just scan-and-go actions. And we considered deriving the next default from the previous choice. That is to say, if you are marking a location and your next move would statistically be to submit the next sequential number, we might be able to default to the next number in sequence, up or down. Problem though: there’s inconsistency both in the activity throughout the day and in the numerical designations across the world locations. They might not even be numbers. Then there’s something else. Bad data entered accidentally means duplicated effort or worse. And we know a user running on ‘auto-pilot’ may just go with whatever shows up. An assumed value resulting in incorrect data makes the activity worthless. And in a fast-paced environment, cutting corners is something you hope you don’t see but know exists.
Deriving from previous data sounds good but doesn’t encourage the inattentive or corner-cutters to perform at their best
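To make the trade-off concrete, here is a minimal sketch of what a cautious “derive the next default” rule could look like. This is purely illustrative: the names (`suggestNextStall`, `ScanEntry`) and the strict +/-1 rule are my assumptions, not the real application’s logic. The point is that the suggestion bails out to manual entry whenever the designations aren’t a clean numeric sequence.

```typescript
// Illustrative sketch: only suggest a next stall when the last two scans
// form a clean numeric walk; otherwise return null and require manual entry.
interface ScanEntry {
  stall: string;      // e.g. "B-101" -- formats vary by world location
  scannedAt: number;  // epoch millis
}

const STALL_PATTERN = /^([A-Z]+-)(\d+)$/;

function suggestNextStall(history: ScanEntry[]): string | null {
  if (history.length < 2) return null; // not enough signal to predict
  const [prev, last] = history.slice(-2);
  const p = STALL_PATTERN.exec(prev.stall);
  const l = STALL_PATTERN.exec(last.stall);
  // Bail out on non-numeric or mixed-prefix designations -- many lots
  // worldwide don't use sequential numbers at all.
  if (!p || !l || p[1] !== l[1]) return null;
  const step = Number(l[2]) - Number(p[2]);
  // Only a strict +1 or -1 walk counts as a pattern worth trusting.
  if (Math.abs(step) !== 1) return null;
  const next = Number(l[2]) + step;
  if (next < 0) return null;
  // Preserve zero-padding so "B-001" suggests "B-002", not "B-2".
  return `${l[1]}${String(next).padStart(l[2].length, "0")}`;
}
```

Note how conservative the guard is: a wrong default silently accepted by an auto-pilot user is worse than no default at all.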
But it’s so simple, right? Just a couple fields, a couple buttons. Ahhh, but it never really is on a continuum of work, is it? Because we are building an activity that is one context among three to six others in an application being built out incrementally, and we establish a paradigm with the first one users see. The user must also be able to enter and exit this context or “loop” and switch to another once it becomes available. And then the user’s capabilities in this larger application depend on their security role and geo-location.
If you had all the time you needed, there may be ways to automate on their behalf. But if you had all the time in the world, you would prefer that each location had its own detection system and homing capabilities. A Roomba vacuum knows when it’s home. Can a car know what home it’s in, among millions, and self-report? Doesn’t matter, because your deadline is a lot shorter now than when you first started thinking about Roombas. And there’s that feeling that for all the ideas you had to make something more powerful, you’re really back to the first order, which is to make a repeated task as efficient as possible.
So we’re back to Scan, Update, Submit, Repeat (loop). And of course within that loop, allow the user to context switch or log out. If we get the refresh rate down to milliseconds, it can be imperceptible. The user will always update the location themselves. The user will type in the update, because a dropdown means more searching. The user will be able to use this application with assistive technologies for sight and sound.
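That loop, plus the exits, can be sketched as a tiny state machine. The state and event names here are my own shorthand for the flow described above, not the real app’s internals:

```typescript
// Sketch of the happy-path loop: Scan -> Update -> Submit -> back to Scan,
// with explicit exits for context-switching and logout.
type State = "SCAN" | "UPDATE" | "SUBMIT";
type LoopEvent = "scanned" | "edited" | "submitted" | "cancel" | "switchContext" | "logout";

const transitions: Record<State, Partial<Record<LoopEvent, State | "EXIT">>> = {
  SCAN:   { scanned: "UPDATE", switchContext: "EXIT", logout: "EXIT" },
  UPDATE: { edited: "SUBMIT", cancel: "SCAN", switchContext: "EXIT", logout: "EXIT" },
  SUBMIT: { submitted: "SCAN", cancel: "UPDATE" }, // submit always loops back to scan
};

function next(state: State, event: LoopEvent): State | "EXIT" {
  const target = transitions[state][event];
  // Events that don't apply in the current state are ignored, which keeps
  // stray scans or double-taps from derailing the loop.
  return target ?? state;
}
```

Writing it down this way makes the backtracking paths (`cancel`) as explicit as the happy path, which matters once mistakes enter the picture.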
Sequences, loops and user permissions
If I were to offer advice to anyone working on big applications it would be this: ROLEPLAY as the user, and at every stage of a transaction, understand what will happen when you make a mistake and want to backtrack. Remember, software exists because it’s intended to be used over and over. The problem is, when constructing endpoints, people get caught up in whether a scenario will work, but in their requirements fail to consider how to elegantly handle mistakes and backtracking before it’s too late. If YOU can anticipate those unspoken and unrealized problems, you can save everyone a lot of headache.
So the idea below is to take variations of screens, lay them out, and play out the transaction. But the other thing to know is that the same transaction can often be carried out by users with different security permissions, and their views or capabilities will often change. A tier-1 user, for example, may have the ability to create and edit a transaction, but not delete it. A tier-3 user might have several extra data views they can move in and out of while still performing the same transactions as a tier-1, but with more floating options buttons or an elaborate context menu.
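A capability matrix is one simple way to keep those role differences honest while playing out the screens. The matrix below is hypothetical (tier-2 in particular is my invention to fill the gap between the tiers mentioned above); it just shows the shape of the idea:

```typescript
// Hypothetical permission matrix for the tiered roles described above.
// Capability names and the tier-2 row are illustrative assumptions.
type Capability = "create" | "edit" | "delete" | "extraDataViews";

const tierCapabilities: Record<string, Capability[]> = {
  "tier-1": ["create", "edit"],                            // no delete
  "tier-2": ["create", "edit", "delete"],                  // assumed middle tier
  "tier-3": ["create", "edit", "delete", "extraDataViews"],
};

function can(tier: string, capability: Capability): boolean {
  // Unknown tiers get nothing -- fail closed rather than open.
  return tierCapabilities[tier]?.includes(capability) ?? false;
}
```

Keeping the matrix in one place means each screen variation can be checked against it instead of hard-coding which buttons a tier sees.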
Now here’s something fun too. SOMETIMES, a tier-3 user needs the ability to know visually that they are logged in under tier-1 level, and so the same screen might be given an alternate color scheme so they know immediately “I’m logged in improperly for this task.” The only way you will know about the need for this sort of hidden feature and how to handle it is to INTERVIEW THE USERS and their bosses. Hell hath no fury like a manager who lost her superuser privileges and can’t fulfill her daily obligations.
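The “am I logged in at the wrong level?” cue can be as small as a theme switch keyed off the gap between the account’s normal tier and the session’s tier. The function and theme names below are made up for illustration:

```typescript
// Sketch: pick an alternate color scheme when a user's session tier is
// lower than the tier their account normally carries, so the mismatch is
// visible at a glance. Theme names are invented.
function themeFor(accountTier: number, sessionTier: number): string {
  // e.g. a tier-3 user logged in as tier-1 sees the warning scheme.
  return sessionTier < accountTier ? "amber-warning" : "standard";
}
```

It’s a one-liner, but it only gets written at all if the interviews surface the need for it.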
On screen alerts, confirmations and deliberate slowdowns
I’ve talked this topic to death, but I want to go over it succinctly. We spent a LOT of time hashing out what it means for a user to get too comfortable inputting bad data to save time. If you had all the time in the world, you could write a special feature where users with a tendency to cut corners are given more popup confirmations, so they are encouraged to input good information even when it takes longer. And once such a user passes a qualifying state, they could unlock easy mode, where the application assumes they are making correct choices.
The best analogy is taking a timed multiple-choice exam. As time runs out you realize it’s better to at least answer questions and pick up a few lucky correct answers by choosing “C.” There is a subset of employees who just want to complete a transaction using whatever option will let them process it. Other people work too quickly with not enough coffee, and after 10,000 repetitions they no longer read what’s in front of them and dismiss important information. How do you handle it? I can’t tell you that, but just know that in the room, half the people will say they want more modal popups, and the other half will want fewer. And a month later, both sides might change their minds completely.
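If you did build the earn-your-easy-mode feature, the gating logic could look something like this sketch. The thresholds and stat names are invented for illustration; the real decision (and whether to build it at all) came down to the debates described above:

```typescript
// Sketch of the "deliberate slowdown" idea: users whose recent work keeps
// getting corrected see an extra confirmation step; a long clean streak
// unlocks the fast path. Thresholds are illustrative assumptions.
interface UserStats {
  recentTransactions: number;
  recentCorrections: number; // transactions later fixed or flagged as bad data
}

const QUALIFYING_STREAK = 200;    // clean volume needed before easy mode is possible
const MAX_CORRECTION_RATE = 0.01; // above 1% corrections, keep the confirmations

function needsConfirmation(stats: UserStats): boolean {
  if (stats.recentTransactions < QUALIFYING_STREAK) return true; // not yet qualified
  return stats.recentCorrections / stats.recentTransactions > MAX_CORRECTION_RATE;
}
```

The nice property is that the slowdown is earned back automatically: nobody has to file a ticket to escape modal purgatory once their data quality recovers.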
I worked on these applications for several months. Because a new team was being spun up and this would be one of several hybrid web apps worked on by different teams, I developed a strong desire to build a web prototype in HTML and Material UI that would mimic what would end up being an Angular or React app. This instinct proved valuable, because it guided me toward layout decisions that these teams, which included some junior members, could digest.
I provided my layout files for them to use and to test the media query behavior. Later on, when some team members were deviating from the design, I had multiple discussions with the leads to ensure these files would remain useful as the reference for grid behavior.
There was a point when development was underway on the first application and I was doing the UX work on two subsequent apps. At that point I moved to working in vector software. Though only a few screens are shown here, I accumulated over a hundred screens with many layout variations, responding to feedback and creating flows on screen. An effective approach in meetings was to put a few mobile screens on one PowerPoint slide, so that across one or two slides we’d have a good picture of one scenario. I’d illustrate multiple scenarios and provide them as their own slides so we could discuss them as questions came up.
The team collaboration went really well. I’m pleased to say I think my work set the devs off on good footing. The discussions about how the app should work and about incremental release also went well. Speed of refresh was imperceptible. The first weekend it went out to a limited audience who had been receiving the new devices over a few weeks and could finally use them: no reports of failures, 30,000 requests, and nothing notable in the logs. It worked, it was fast and simple, and it set the stage for the subsequent activities over the coming months. Win.