Brief Introduction and Requirements
Inspect empowers inspectors and building managers with data storytelling by providing comprehensive tools and features to collect, manage, and analyse field data efficiently.
It enables inspectors to perform inspections using digital forms on mobile devices, eliminating the need for paper-based documentation. The software offers customisable inspection templates, allowing inspectors to tailor their forms to specific inspection requirements. Inspectors can also easily capture photos, videos, and audio recordings during inspections, ensuring accurate and visual documentation of findings.
One of the standout features of Inspect is its ability to transform collected field data into engaging reports. Inspectors can create professional reports incorporating rich media, such as annotated images and sensor data, to convey their observations effectively.
Key terminology in Inspect
Before I proceed with the case study, here are some essential terms specific to Inspect:
Spot: A spot is a collection of information about a particular point of interest on the floor plan. It can include the defect’s location, type, and severity, or a compilation of defect images.
Asset: An asset contains key information about a building scheduled for inspection.
Checklist: A systematic list of items to be inspected, ensuring thoroughness and consistency in the inspection process; in-app templates are based on standard, government-compliant building checklists.
Report: A system-generated summary of collected data, including findings, observations, and actionable recommendations.
Although report generation is one of the app’s key value propositions, the current workflow is rigid: it expects the user to complete report generation in one linear flow, with little flexibility.
- When selecting spots for reporting, the user must select everything in one go. Should they decide to change areas or photos for reporting, they must abandon and restart the entire selection process. This linear workflow creates unnecessary pressure.
- There’s also no way to preview spot details or enrich spot photos during selection. Selecting pictures for reporting is frustrating when all you can see are tiny thumbnails: the user cannot pick specific photos for reporting, and there is no way to zoom in or identify the most recently taken photos.
- There is also no way to add photo remarks and annotate selected photos during the selection process.
- There is no way to know the sequence of items in the final report, and reordering spots with the existing workflow is troublesome. Customisation of report templates is reserved for Backoffice (on the web), with little flexibility for report preview and details in-app.
Report generation in the field can be very fluid. The inspector may need to include different data types depending on the report’s purpose. For example, to give a project status update for a specific timeframe, the inspector may only want to select defects (spots, as we call them in Inspect) within a date range.
Thus, the report generation flow should be as fluid as shopping on an e-commerce app: users should be able to curate and review shortlisted data for reporting.
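The date-range scenario described above amounts to a simple filter over collected spots. A minimal sketch, with hypothetical field names (not Inspect’s actual data model):

```python
from datetime import date

def spots_in_range(spots, start, end):
    """Keep only spots whose capture date falls within [start, end]."""
    return [s for s in spots if start <= s["captured_on"] <= end]

# Illustrative field data
field_spots = [
    {"id": "crack-01", "captured_on": date(2022, 3, 1)},
    {"id": "leak-02", "captured_on": date(2022, 3, 15)},
    {"id": "spall-03", "captured_on": date(2022, 4, 2)},
]

# A status update covering March would shortlist only the first two spots
march = spots_in_range(field_spots, date(2022, 3, 1), date(2022, 3, 31))
```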
To kick off the experiment, we decided on the epic story to guide the concept designs:
As a user, I want a flexible report-generation process that allows me to add, remove, enrich and review the report items freely without losing my selection before generating the final report.
Critical stages of the process:
- Hypothesis & Conceptualisation: Testing if the concept of “adding to cart” worked for an inspection app
- User Testing: 2 user interviews; unmoderated testing with 8 internal users
- Iteration: Revised designs based on user insights as well as stakeholder feedback
- Final Design: Prepared the design file for discussion with the dev team
Hypothesis and Concept Design
We were inspired by the “add-to-cart” concept commonly seen in e-commerce. We needed to validate if the idea made sense for users of an inspection app. I mocked up crucial flows for user testing.
The scope covered in the prototype used for testing:
- Adding spots to the report
- Adding checklists to the report
- Adding multiple spots and checklists to the report
- Re-arranging the order of spots in the report
- Zooming in and enriching media in spots
- Removing photos from the shortlist
- Generating a report
Our proposed design lets users easily select one or multiple spots to add to their report by tapping the “+” button. The report generation process is fluid and dynamic, allowing users to continue collecting data in the field while shortlisting spots.
Users can shift the position of spots within the report “cart,” similar to how one can define the order of songs in a Spotify playlist. They can also remove spots, and hide or unhide specific photos or data for each spot during the review process.
The dynamic review process allows users to enrich spot data by zooming in to view photos and adding annotations that will automatically update in the “cart.”
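The cart behaviour described above can be sketched as a small data model. This is an illustrative sketch only; the class and field names are assumptions, not Inspect’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Spot:
    """A shortlisted point of interest (names are illustrative)."""
    spot_id: str
    photos: list = field(default_factory=list)
    hidden_photos: set = field(default_factory=set)

class ReportCart:
    """Minimal sketch of the proposed report 'cart'."""

    def __init__(self):
        self._spots = []

    def add(self, spot):
        # The "+" button: shortlist a spot without leaving data collection.
        if all(s.spot_id != spot.spot_id for s in self._spots):
            self._spots.append(spot)

    def remove(self, spot_id):
        self._spots = [s for s in self._spots if s.spot_id != spot_id]

    def move(self, spot_id, new_index):
        # Reorder like a playlist: cart order defines the report order.
        idx = next(i for i, s in enumerate(self._spots) if s.spot_id == spot_id)
        self._spots.insert(new_index, self._spots.pop(idx))

    def toggle_photo(self, spot_id, photo):
        # Hide or unhide a specific photo during the review process.
        spot = next(s for s in self._spots if s.spot_id == spot_id)
        if photo in spot.hidden_photos:
            spot.hidden_photos.discard(photo)
        else:
            spot.hidden_photos.add(photo)

    def order(self):
        return [s.spot_id for s in self._spots]
```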
To make the report editing process intuitive, we have allocated more text field space for required details, rather than confining them within a small pop-up modal with multiple confusing toggles for settings.
Armed with the prototype, I conducted user interviews with 2 of our in-house professional civil engineers and unmoderated testing with 8 internal users, including stakeholders from the commercial team who were less likely to have preconceptions about the app.
Tools and Methodology
For the user interviews with subject matter experts, I’d prepared some questions about their existing workflow with Inspect and open-ended discovery questions around how they generate a report with field data. The key objective was to uncover pain points in their daily work.
I accompanied the short interviews with moderated testing of the proposed new report generation flow. For the moderated testing, I’d prepared a task menu covering all the essential flows; given a scenario, users had to complete the corresponding task.
For the unmoderated testing, I designed the test using UX Army. I sent the testing link to selected internal users and scored the test based on 2 main metrics:
- Successful Task Completion
- SUS (System Usability Scale)
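For reference, SUS follows a standard scoring formula: odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd items contribute (response - 1); even items contribute (5 - response).
    The summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A neutral participant (all 3s) scores exactly 50
print(sus_score([3] * 10))  # → 50.0
```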
One of the methods I used to synthesise the test results was a Rainbow Spreadsheet, which gives a visual representation of critical observations and makes it easier to identify patterns among testers.
Each participant was allocated a unique colour. The chart was based on a breakdown of the tasks and the key observations made for each participant; I then added a participant’s colour to each observation they shared with others. The goal of the synthesis was to identify patterns across observations.
✅ Users were quickly onboarded onto the “Add to cart” concept for reporting. After the first visual feedback when items were added to the cart, they seemed to get the hang of it.
✅ Most users could locate the “report cart” on the top right of the global nav bar. The interactive counter on the cart icon helps communicate the idea of a “live” cart.
💡 The idea of reporting by report type is still not explicit enough with the dropdown design.
The video recordings demonstrate that users quickly grasped how to use the “Add to cart” feature for field reporting, adding their items to the cart without any issues. Our design team was pleased: the positive feedback validated a concept we were confident in.
However, user testing did reveal a limitation in the software design. Inspectors can capture various types of data for a defect, such as images, building components, and sensor data. Due to how the data infrastructure was set up, users could only report one data type at a time, whether a spot, photo, or checklist. This confused users, as they had to “check out” the cart multiple times depending on their selected data types.
Iteration After User Testing
How to gracefully navigate a technical constraint: Guiding users to generate reports by report type
Fulfilling the ideal state, where the user can combine different data types such as spots, checklists, and photos in one report, would require substantial technical effort. As an interim solution, I found a way to show the user explicitly that they can only generate one report type at a time.
I tried out a few UI options to address the user confusion about selecting the data type before tapping the “Generate Report” button. In the initial design, I used a dropdown to show the user they could select the report type before clicking “Generate”. This proved too passive; users didn’t realise it was a mandatory step. In the new iteration, I tried a brightly coloured teal banner at the top to inform users, and an alternative design using a side panel to indicate the different report types in the “cart”.
After a month of work on concept validation, user testing, and design iterations, the feature, unfortunately, was put on hold. There were too many competing features on the product roadmap planned for the company’s 2022 annual keynote, and this one sadly did not make the cut.
As of writing in 2023, the app has grown quite a bit, with many significant developments, including the addition of checklists to give users the tools to create and customise digital forms. The app is also going through an essential update to its data infrastructure, which will open up opportunities for combined reporting in the future.
An emerging trend from customer requests
The need for reporting flexibility within the app is growing in importance. I’ve been seeing more customer feedback around report customisation, specifically the ability to select which data is shown. The current solutions are primarily quick fixes, and we’ve been adding too many controls under advanced settings, which is becoming bloated with rules.
Despite the feature not making it to the final product roadmap, the development process resulted in many innovative ideas that could be implemented.
I enjoyed exploring this feature’s possibilities and taking it out on a test drive in the user testing sessions. I am fortunate to work on a design-initiated project driven by fundamental customer pain points after going through a proper heuristic review of the app. This highlights the design team’s dedication to continuously exploring new possibilities and delivering the best user experience possible.
Through this experience, I’ve grown my knowledge in conducting user interviews and testing sessions. I’ve also learnt to balance user and stakeholder feedback in guiding my design iterations — learning which problems to prioritise, in the context of product constraints, was a valuable lesson.
Many factors contribute to the inclusion of a feature in the final roadmap, including the company’s business objectives, the unique selling point, and customer value versus effort. I work best in an environment where the prioritisation process is collaborative and transparent between designers and product managers; some product managers prefer a more directive approach to owning the roadmap.