Sharing UX work can be tricky when you’ve worked in the federal government or under corporate non-disclosure agreements. I’m going to discuss a project in general terms, and hopefully convey enough of my experience and contribution during my two years at Enterprise Holdings. I worked for a couple of months on a project for managing vehicle inventory. The work was being performed adequately on existing hardware, but a hardware replacement was inevitably looming and a software update was non-negotiable.
There were a few applications we were tasked with porting over from a previous handheld device. The older handheld was a long-used, text-based inventory scanner. At first one might think, “How archaic,” imagining a green LCD with just a few lines of abbreviated text. But these text-based systems have one major advantage: speed. Battery life and reliability are pluses too. What could refresh faster for a user than a terminal display? What could be quicker to read than a common language of shorthand abbreviations? And compared to a screen refresh of a few bytes, even the smallest asynchronous HTTP request means new services now exist and become part of the wait time. It’s also a bit intimidating when you know somebody will be using both devices right next to each other, comparing their responsiveness.
When we met about this objective, we knew the default behaviors, the “happy path,” were just scan-and-go actions. We considered deriving the next default from the previous choice. That is to say, if you are marking a location and your next move would statistically be the next sequential number, we might default to the next number in sequence, up or down. The problem: there’s inconsistency both in the activity throughout the day and in the numerical designations across locations worldwide. They might not even be numbers. Then there’s something else. Bad data entered accidentally means duplicated effort or worse, and a user running on auto-pilot may just go with whatever shows up. An assumed value that results in incorrect data makes the activity worthless. And in a fast-paced environment, cutting corners is something you hope you don’t see but know exists.
Deriving from previous data sounds good, but it doesn’t encourage the inattentive or the corner-cutters to perform at their best.
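To make the trade-off concrete, here is a minimal sketch of the “derive the next default” idea we considered and rejected. The function name and the designation format are hypothetical, not the shipped logic; the point is that the derivation only works for designations ending in digits, and even then it hands the auto-pilot user a plausible-looking value that may be wrong.

```typescript
// Hypothetical sketch of deriving the next sequential default from the
// previous entry. Names and formats are illustrative only.
function nextDefault(previous: string): string | null {
  // Only attempt a default when the designation ends in digits; many
  // locations use non-numeric designations, so we bail out with null.
  const match = previous.match(/^(.*?)(\d+)$/);
  if (match === null) return null; // no safe assumption possible
  const [, prefix, digits] = match;
  // Preserve any zero-padding in the original designation.
  const next = String(Number(digits) + 1).padStart(digits.length, "0");
  return prefix + next;
}
```

Even when the derivation succeeds (e.g. `"A-041"` → `"A-042"`), nothing guarantees the user is actually at that spot, which is exactly why we dropped the idea.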
But it’s so simple, right? Just a couple fields, a couple buttons. Ahhh, but on a continuum of work it never really is, is it? We were building an activity that is one context among three to six others in an application being built out incrementally, and the first one users see establishes a paradigm. The user must also be able to enter and exit this context, or “loop,” and switch to another once it becomes available. And the user’s capabilities in the larger application depend on their security role and geo-location.
If you had all the time you needed, there might be ways to automate on the user’s behalf. If you had all the time in the world, you’d prefer that each location had its own detection system and homing capabilities. A Roomba vacuum knows when it’s home. Can a car know which home it’s in, among millions, and self-report? It doesn’t matter, because your deadline is a lot shorter now than when you first started thinking about Roombas. And there’s that feeling that, for all the ideas you had to make something more powerful, you’re really back to the first order: make a repeated task as efficient as possible.
So we’re back to Scan, Update, Submit, Loop. Within that loop, allow the user to context-switch or log out. If we get the refresh rate down to milliseconds, it’s imperceptible. The user will always update the location themselves. The user will type in the update, because a dropdown means more searching. The user will be able to use the application with assistive technologies for sight and sound.
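The loop above can be sketched as a small validation step. This is an assumption-laden illustration, not the production code: the entry shape and function names are invented, but it captures the rule we settled on, that nothing is prefilled and blank input is rejected rather than replaced with a derived default.

```typescript
// Each entry is what the user scanned plus the location they typed in.
// We never assume a location, matching the "user always updates it" rule.
interface Entry {
  vin: string;
  location: string;
}

// One pass over the loop's entries: accept only fully user-supplied data,
// reject anything blank instead of falling back to a guessed value.
function processLoop(entries: Entry[]): { submitted: Entry[]; rejected: Entry[] } {
  const submitted: Entry[] = [];
  const rejected: Entry[] = [];
  for (const e of entries) {
    if (e.vin.trim() === "" || e.location.trim() === "") {
      rejected.push(e); // send back to the user, never auto-fill
    } else {
      submitted.push(e);
    }
  }
  return { submitted, rejected };
}
```

Rejecting rather than defaulting costs a few keystrokes per vehicle, but it keeps auto-pilot mistakes out of the inventory data, which was the whole point.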
I worked on these applications for several months. Because a new team was being spun up and this would be one of several hybrid web apps worked on by different teams, I felt a strong pull to build a web prototype in HTML and Material UI that would mimic what would end up being an Angular or React app. The instinct proved valuable: it guided me toward layout decisions that teams with some junior members could digest.
I provided my layout files for them to use and to test the media-query behavior. Later on, when some team members deviated from the design, I had multiple discussions with the leads to ensure these files remained useful as the reference for grid behavior.
There was a point when development was underway on the first application while I did the UX work on two subsequent apps. Here I moved to working in vector software. Though only a few screens are shown here, I accumulated over a hundred, with many layout variations, responding to feedback and creating flows on screen. An effective approach in meetings was to place a few mobile screens on one PowerPoint slide; across one or two slides we’d have a good picture of a scenario. I’d illustrate multiple scenarios and provide each as its own slide so we could discuss them as questions came up.
The team collaboration went really well, and I’m pleased to say my work set the devs off on good footing. The discussions about how the app should work, and about incremental release, also went well. The speed of refresh was imperceptible. The first weekend it went out to a limited audience who had been receiving the new devices over a few weeks and could finally use them: no reports of failures, 30,000 requests, nothing notable in the logs. It worked, it was fast and simple, and it set the stage for the subsequent activities over the coming months. Win.