The way I see it, if you want the rainbow, you gotta put up with the rain.

Project Team RIPA
22 May 2021

Dolly Parton

Well, that heading has a pretty tenuous link to our latest blog. The best we can do is that there's been a stormy weather front outside all week, and perhaps we're all feeling rained under with the workload as we work towards our private beta rainbow.

So this blog is looking back over the sprint that ended 21 May 2021.

It’s been yet another busy sprint. We held a session with some of our users who’ve registered an interest in submitting an LDC (Lawful Development Certificate) application through the tool when we start our private beta trialling. They seemed really positive and interested, so that was great for the team to see.

The non-planner testing that we held for the RIPA tool went well: we had a good number of volunteers from the partners (Lambeth, Buckinghamshire, Southwark and the MHCLG), all providing a selection of officers to try the tool with dummy, pre-planned applications.

We had a follow-up session with them. Whilst some of their feedback will be available to view via the spreadsheet that collates responses from the feedback widget on the tool’s site, for the benefit of readers of this blog who are interested in reading some of the feedback, we’ve dropped the feedback received by email into this spreadsheet.

Some of the feedback we received by email included screenshots to accompany their comments, a selection of which we’ve pasted below:

Feedback: Other owners — doesn’t allow you to pick someone at the same address automatically — have to manually enter their details but then address gets error message if not all boxes filled

Screenshot relating to manually filling in other owner details

Feedback: Data to check at end has loads of duplicated entries and had answers to questions I wasn’t asked (one about a sports field?) Also odd coding shown

Screenshot relating to checking your responses at the end of the process

Feedback: Isn’t this the same question?

As you can see, our volunteers have really been helpful in giving us more insight into their experience of being given a dummy scenario with associated plans and drawings and being asked to submit that scenario through the tool. It’s not completely ‘real life’-like, because a homeowner would have a much better understanding of what their application was about and a deeper knowledge of their own property, such that some of our non-planner queries would quite possibly drop away.

In our regular design and user research catch-ups, as well as our fortnightly Omega sessions, we had lots to talk about from the latest round of user testing.

Currently, our private beta is focused on Lawful Development Certificates (LDC). In an earlier blog we explained this type of application, but for the benefit of those who didn’t read that, very briefly: an LDC is a certificate that an applicant can apply for where they are planning to do work, and/or have carried out work, that won’t or didn’t necessarily require planning permission, but where they want(ed) the assurance that they have permission for that work and/or structure to be in place. In such cases they apply for the certificate: ‘proposed’ if it’s work they have not constructed as yet, and ‘existing’ if they’ve already done the works. This link https://www.gov.uk/guidance/lawful-development-certificates provides more guidance for those interested in reading more about this type of application.

One thing that came out of the design and user research catch-up during this sprint was a decision to move the point at which the questions in the ‘Materials’ section appear. They had previously come after the ‘About You’ section, but feedback from testing made it clear this was confusing users.

Toby of Lambeth commented that it could be that, as applicants reach the ‘About You’ section having completed ‘About the Project’, they get the impression that they’re coming to the end of the process. To then find, having answered questions about themselves as the applicant, that they are presented with a further set of questions about the property, in the form of questions about materials, leaves them feeling disheartened. We all agreed that ‘About You’ would be best placed after all of the questions about the property, materials and so on have been answered. Applicants would then move on to the payment section, which would create a nice linear feel and make the process feel more logical to the user.

We’ve had quite a lot of feedback on the use of tenses in our user testing of the LDC tool. This is partly because we are dealing with both proposed and existing applications. One applicant who was trying an LDCE journey (i.e. works that had already been completed) found issues with the tenses in the draw-site-boundary area of the tool, an area where we hadn’t thought the use of tenses could be problematic. The participant was confused by the text about the site outline, in particular:

“…. The site outline must include all the works, plus any areas that will be cordoned off during the works, and any areas required to gain access during the works”

Use of the phrase ‘during the works’ confused the participant, who pointed out that they had already done the works. This may be something our soon-to-start content designer will be able to help us with, by suggesting wording that makes the variation in tenses easier for users to understand wherever it appears.
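To make the problem a little more concrete, here is one purely illustrative way copy could vary by application type. This isn’t how the tool is actually built, and the ApplicationType and siteOutlineHint names (and the past-tense wording) are our own sketch rather than anything the content designer has proposed:

```typescript
// Purely illustrative sketch: varying hint text by application type.
// ApplicationType and siteOutlineHint are made-up names, not RIPA code.
type ApplicationType = "proposed" | "existing";

function siteOutlineHint(type: ApplicationType): string {
  if (type === "proposed") {
    // Current wording, which assumes the works are still in the future.
    return "The site outline must include all the works, plus any areas that will be cordoned off during the works, and any areas required to gain access during the works.";
  }
  // Assumed past-tense variant for works that have already been completed.
  return "The site outline must include all the works, plus any areas that were cordoned off during the works, and any areas that were required to gain access during the works.";
}

console.log(siteOutlineHint("existing"));
```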

Another interesting conundrum came up around bay windows. Some bay windows can be classed as having their own roof, and some could even protrude far enough out to potentially be classed as an extension. Participants weren’t always clear whether they had to include a separate roof, and if it didn’t need to be ticked but they ticked it anyway, further issues could arise later in the process. Whilst only a tiny minority of applications contain works to a bay window, when they do, those works can be hugely problematic. We agreed that, to improve the user experience, a separate flow for bay windows would need to be built, although perhaps, at this stage, this isn’t an MVP (minimum viable product) requirement for our private beta.

We also agreed that at present the ‘check your answers’ page is very wordy and seems to have a lot of repeats. This is on the MVP list.

One reason for the repeated information is the automated ‘check your answers’ responses that appear. For instance, a user may have answered a question without realising that the same question has also been answered again later in the journey, automatically, based on responses they’ve given to other questions.

We agreed that, for MVP, we will add a note at the top of the page explaining that automated responses may appear to duplicate information, and, on the roadmap, we’ll develop a way of removing the duplication so that only unique responses are shown.

This led to a discussion about how to remove that duplication: one way would be to tag answers that have been automatically generated, so that they can be filtered out of the final ‘check your answers’ page.
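For the curious, a minimal sketch of that tagging idea is below. It isn’t the team’s actual implementation; the Answer shape, the autoGenerated flag and buildCheckYourAnswers are illustrative names only:

```typescript
// Minimal sketch of the tagging idea described above. The Answer shape,
// the autoGenerated flag and buildCheckYourAnswers() are illustrative
// names, not the real RIPA data model.
interface Answer {
  questionText: string;
  response: string;
  autoGenerated: boolean; // true if derived from other responses rather than typed by the user
}

// Only show answers the user actually gave; drop the automated duplicates.
function buildCheckYourAnswers(answers: Answer[]): Answer[] {
  return answers.filter((answer) => !answer.autoGenerated);
}

// Example: the sports-field question was answered automatically, so it
// would no longer appear on the 'check your answers' page.
const answers: Answer[] = [
  { questionText: "Is the property a house?", response: "Yes", autoGenerated: false },
  { questionText: "Is the site a sports field?", response: "No", autoGenerated: true },
];

console.log(buildCheckYourAnswers(answers)); // only the house question remains
```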

Benefits Case

The team working on the benefits case had various meetings over the sprint, including meeting up with Jamie from the MHCLG to go back over the metrics they’ve identified for monitoring.

Kev of Buckinghamshire has developed Power BI reports so that we can access real-time information. They also discussed ‘Clockify’, a time-recording tool they’d attended a demo of. The view was that whilst it was good, it was expensive for large groups, but perhaps it could be used to capture agent and applicant data. It could also be used as a live feed so that we can see changes in real time.

They also discussed the need to start baselining. Whilst the private beta will not necessarily provide useful data, in so far as it might actually show worsening performance, the view was that overall it would be good to capture a baseline now, so that for reporting purposes we can show how things changed as we went through the process.

When discussing the metrics work with the wider project team, one comment was raised about whether the estimates of this work’s impact had included a reduction in officer reporting time as a potential outcome for the business case. At present, some reporting of performance-monitoring data from the existing systems can take as much as a week each month to produce, so we agreed this was worth adding to the metrics we record. We also agreed we needed to create a joint metric, assigned to both the RIPA and BoPS (Back Office Planning System; see bops.digital for more) projects, to capture the time saving in officer reporting and assessment that comes from the improved data RIPA produces, as well as from the improved layout and intuitive design of the BoPS tool.

And there, dear reader (if you’re still with us… and thank you if you are!), we will end. We only meant to write a quick and short update, but so much happens in a sprint that it’s hard to capture even a fragment of it, which, you can be assured, is all the above is. Still, it’s stormy weather outside today, and more is forecast for tomorrow, so hopefully this blog will provide an interesting read for those of us confined to home…

Next Show and Tell — 4th June — 10.45 am (www.ripa.digital)

