Rocket Pro Assist: Analyzing Underwriter-Broker Communication for Automation Opportunities
Rocket Mortgage | UX Research Intern
May 2025 - August 2025 • 8-week timeline • UX Research Lead
Problem
Mortgage brokers frequently call underwriters with routine questions that require minimal effort to answer. These calls disrupt both teams’ workflows and limit brokers’ ability to self-serve within Rocket Pro. Rocket Assist (the chatbot) lacked the automation and flexibility needed to address these common inquiries.
Goal
Identify opportunities for Rocket Assist to automate common broker inquiries and reduce unnecessary phone calls.
Tools:
Qualtrics
Excel
Figma
dScout
Azure DevOps
Methods:
Surveys
User Interviews
Contextual Inquiry
Literature Review

Project Impact

Uncovered how instant chatbot responses to questions could save brokers and underwriters 2-5 minutes per phone call between teams.

Identified the frustration underwriters face when they answer phone calls from brokers that either required simple answers or were irrelevant to the underwriters’ roles.

Delivered recommendations to enhance Rocket Assist’s flexibility, automation, and usefulness across AI comfort levels, resolving ambiguity in the product roadmap.

Combined with my LLM research, this research helped influence a broader strategy shift for chat-based AI products within Rocket Pro.
Research Deep Dive
Why This Research?
In August of 2025, the Rocket Pro product team implemented Rocket Assist, a consumer-facing chatbot, into the Rocket Pro broker platform. It drastically improved the broker user experience by allowing brokers to submit support tickets directly in the portal, rather than through a separate application. This reduced the time and effort brokers needed to request assistance, boosting user satisfaction. It also prompted the product team to ask: what more should the chatbot be able to do?
My research focused on one specific area of the broker experience that we already knew ate into both brokers’ and underwriters’ time: brokers calling underwriters when they ran into snags with a loan. We knew this was a pain point because of the following data gathered by a team of guerrilla researchers brought in to observe underwriters:
1.6X
Work capacity increases per team if we reduce the average time underwriters spend on a call with brokers by 1 minute
1/3rd
Call volume reduction if we eliminate calls underwriters have to redirect to other teams, status updates, and document requests
7,283
Monthly emails that accompany a phone call that would also be eliminated by an alternative means of question resolution
User Research Process and Plan
During a discussion with the Rocket Pro product managers, they mentioned that, even though we already had an idea of the categories that broker questions for underwriters fell into, we still didn't know exactly what questions were being asked, or how much effort it actually took the underwriters to respond to them.
The specifics, and the effort required to respond, mattered a great deal because we were looking for an early, "low-hanging fruit" opportunity to expand the capabilities of Rocket Pro Assist. As we continued discussing the research, I built the following visual to get the team on the same page:
I proposed prioritizing these frequently asked, lower-effort questions because they would introduce the least room for error, allowing us to focus on the long-term strategy of building trust with brokers in our AI-powered systems. This was important for us because these tools would be among our first AI implementations for mortgage brokers, and first impressions can be critical in user experience.
Further, we hypothesized that the lower-effort questions would also typically require fewer steps and less problem-solving. This would mean focusing on simpler processes on the underwriters' end that require less nuance and interpretation for the AI to provide insights on. By cross-referencing the effort required to respond with the frequency of the questions being asked, we could ensure the questions we targeted were actually relevant.
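The cross-referencing logic above can be sketched as a simple scoring exercise. This is a minimal illustration with hypothetical category names and numbers (the real analysis used our survey data in Excel); it simply shows how high-frequency, low-effort questions float to the top as automation candidates.

```python
# Hypothetical survey aggregates per question category.
# Effort is on a 1-5 scale (1 = lowest); frequency is calls per week.
categories = {
    "Loan status update":        {"frequency": 120, "effort": 1},
    "Document upload check":     {"frequency": 95,  "effort": 2},
    "Income calculation review": {"frequency": 40,  "effort": 4},
    "Guideline interpretation":  {"frequency": 25,  "effort": 5},
}

def priority(stats):
    # High frequency and low effort -> strongest automation candidate
    return stats["frequency"] / stats["effort"]

ranked = sorted(categories, key=lambda c: priority(categories[c]), reverse=True)
for name in ranked:
    s = categories[name]
    print(f"{name}: freq={s['frequency']}, effort={s['effort']}, score={priority(s):.1f}")
```

Under these made-up numbers, routine status checks rank first and judgment-heavy guideline questions rank last, which mirrors the prioritization argument above.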
Research Questions
After homing in on what we wanted to learn from this research and analyzing the data we had on hand, I distilled our interest into 3 research questions:
1.
Specifically, why do brokers reach out to underwriters via phone calls?
2.
What are the most common broker inquiries that require the least effort from underwriters to solve?
3.
How receptive are underwriters and brokers to automating the response to some common inquiries?
I proposed adding the third research question about receptiveness to automation largely because we lacked insights into broker and underwriter trust in AI. If our goal was for Rocket Pro Assist to take the common, easy questions off underwriters' plates and build trust, then we needed to actually benchmark what that trust looked like. This set the team up to gather insights into underwriters' perceptions of AI before the full implementation of Rocket Pro Assist, and to compare them afterward. It also allows us to understand how our AI products compare to the general sentiment of our broker users, and whether broker confusion and frustration increased or decreased with the new releases.
Research Plan
I opted for a combination of qualitative and quantitative research methods. By keeping the survey mostly quantitative, we could more objectively describe the frequency and effort of different broker inquiries. The qualitative approach with broker interviews allowed us to gather more context while validating the survey’s findings.
Desk Research
Analyze the data gathered from the team that observed underwriter workflows to uncover what data we already have
Contextual Inquiry
Spend time shadowing underwriters to learn about what processes look like on their end and observe their interactions with brokers
Survey with Underwriters (130 participants)
Conduct a survey with Rocket’s underwriters to get a broad view of the common questions brokers call for, the effort needed to resolve them, and any concerns or confidence they have in AI to handle these inquiries.
Interviews with Brokers (6 participants)
After analyzing the data gathered from the underwriter surveys, conduct interviews with brokers to validate the findings, learn their perspective about having to call underwriters, and uncover their perceptions of AI.
Participants
Our first thought when discussing this research was that we would talk to brokers and find out why they call underwriters. After taking a moment to digest the information and think through our research plan, we quickly decided to focus the majority of our research on underwriters' experiences answering broker phone calls.
This was largely because underwriters have a higher-level overview of the questions brokers call them for. If we were to ask a single broker why they call an underwriter, we would capture the behavior of an individual, not a trend.
Rather than just gathering data from brokers, we could focus on studying underwriters to identify the questions brokers ask. We could then validate this data with a smaller group of brokers.
We moved forward with this because:
Underwriters also understand the effort necessary to respond to brokers
Brokers are harder to recruit and often require greater incentives.
There are different motivations behind phone calls for underwriters and brokers.
The roles of brokers and underwriters are two sides to the same coin; what you change for one workflow will affect the other.
Thus, we decided to focus on underwriters as the primary participants, and consult a smaller group of brokers as secondary participants who could give us insights while validating the initial findings.
After identifying the value of having 2 participant groups for the study, and with insight from the team, we also decided to pursue a mixed-methods approach that would maximize the impact of the research. This made the most sense because we already had the general topics of the brokers' phone calls from the earlier guerrilla research team. What we needed were largely quantitative insights into the specifics of the calls:
How did these topics compare to one another in terms of call frequency?
When calling, what are the specific questions being asked, what topics do they fall under?
How much effort is required from underwriters to provide a response to these questions?
We also wanted to gather quantitative data on underwriters' perceptions of AI and their confidence in AI to provide assistance before brokers make a phone call. Meanwhile, we could conduct qualitative research with brokers through user interviews to gather additional context and cross-reference the underwriters' insights directly with brokers.
Research Execution: Cross-team Collaboration
A major learning I took away from this project was the value of seeking out other team members to inform and progress the research. Keeping this work visible in meetings and bringing it up with team members allowed me to constantly find new ways to push the project forward, enhance my approach, and bring it to a larger audience.
Underwriter Survey
Coordinating with the operations team, we sent our Qualtrics survey to a team of 240 underwriters. We prioritized making the survey brief and punchy to maximize completion rates, ensuring that the more open-ended qualitative questions were targeted and deliberate. Before sending the survey out, I verified the logic and meaning of the questions with underwriters to ensure that I was asking questions from the perspective of their work, not from what I assumed that perspective to be.
Underwriters were provided with question categories based on past operations data and an open-ended option, then asked to pick their top four most common broker question categories. We then asked them to rank these categories in terms of effort, based on time taken and depth of knowledge needed to respond. Then, we had them provide examples of the 3 most common questions they receive from brokers, and categorize them.
Building and analyzing the survey results was a unique challenge for me, and an invaluable learning experience.
Building this survey in Qualtrics required a great deal of iteration to ensure the logic of piping responses into later questions worked properly. We wanted the rich insights, but we also needed the survey to be as low-effort as possible for the participants, while still maintaining rigor.
As for analysis, while the Likert scale questions were straightforward, I witnessed in real-time what can happen when participants have an "Other" option. While it is necessary to ensure the data is accurate and they can speak to their experiences, it was at times comical seeing how often participants would select "Other" and fill in the blank with a response that was provided in the multiple choice options. Fortunately, it was easy to clean the data in Excel for analysis.
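The "Other" cleanup described above was done in Excel, but the idea generalizes. Here is a hypothetical pandas sketch (invented column names and keyword map) of the same recoding step: free-text "Other" answers that merely restate an existing multiple-choice option are folded back into that category before counting.

```python
import pandas as pd

# Hypothetical raw survey responses: some "Other" answers restate a
# category that was already offered as a multiple-choice option.
responses = pd.DataFrame({
    "category": ["Loan status", "Other", "Other", "Documents"],
    "other_text": [None, "checking loan status", "rate question", None],
})

# Map free-text keywords back to the provided options (assumed mapping)
keyword_map = {"loan status": "Loan status", "document": "Documents"}

def recode(row):
    if row["category"] != "Other":
        return row["category"]
    text = (row["other_text"] or "").lower()
    for keyword, category in keyword_map.items():
        if keyword in text:
            return category  # restated an existing option
    return "Other"  # genuinely new answer, kept for manual review

responses["category_clean"] = responses.apply(recode, axis=1)
print(responses["category_clean"].value_counts())
```

After recoding, the frequency counts reflect the true category distribution rather than being deflated by restated "Other" answers.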
Broker Interviews
After an initial analysis of the survey results, we tapped into our broker panel, inviting around 100 individuals to schedule interviews. This helped us validate our findings while digging directly into brokers' experiences of calling underwriters.
Brokers were asked questions outlining their entire experience of calling underwriters. We asked them about why they called, what guidance they were looking for, the responses they received, and more about how they feel about the process. At the end, we turned the questions towards AI focusing on if and how they currently use it, as well as their thoughts on having it as an alternative for them calling the underwriters.
One approach to the questions worth highlighting is the inclusion of questions meant to identify whether brokers calling underwriters with questions is unique to Rocket, or something they also find themselves doing with other companies' underwriters. This was largely due to a finding from the underwriter survey: underwriters felt they were receiving calls about information already in the portal, with many stating they felt brokers lacked proper training to navigate Rocket Pro effectively.
With this in mind, we wanted to gather information about how Rocket Pro compares to other platforms to distinguish a few different factors:
Was the Rocket Pro portal uniquely confusing, resulting in brokers calling our underwriters more?
Are all broker portals confusing, meaning Rocket Pro is just one among many?
Is having to call the underwriters just part of the job, no matter what company underwrites the loan?
Do other companies even allow them to call the underwriter as Rocket does?
Research Results
Where Automation Delivers the Most Value
We collected insights from 130 underwriters and 6 brokers, pinpointing the most frequent and low-effort inquiries made via phone. We found broker inquiries cluster around low-effort, high-frequency tasks, making them ideal candidates for automation.

Document Uploads & Reviews
Verifying file uploads
Clarification of loan conditions

Loan Status & Process Updates
Checking on clear to close status (CTC)
Requests to send closing documents
Questions about internal processes

Client Finances
Cross-checking client income calculations
Verifying client assets
Establishing a client's credit score
With these insights in hand, we no longer had to rely on the overarching topics of the broker inquiries. Instead, we could pinpoint exactly what brokers were calling about within categories like loan status, client finances, and document uploads.
These inquiries require minimal underwriting judgment but consume significant time, creating an ideal automation opportunity without compromising decision quality.
This allows us to:
• Confirm prioritization
• Align volume with effort
• Maximize the value of AI integration while minimizing the risk of incorrect information causing distrust among users.
Reservations About Automation
We also inquired about general attitudes and concerns with incorporating automation into broker workflows, to help us anticipate potential pain points.
Underwriter & Broker Concerns
Frustrating brokers
Accuracy & trust
Loss of human touch
Accountability
Design Opportunities
Faster access to real-time info
Fewer disruptions to the work of underwriters & brokers
Integration with current info & data sources
Clear escalation paths
Beyond the Scope: Differing Perspectives of Phone Calls & Tension
We also found a common theme of both underwriters and brokers agreeing that some phone calls are necessary for underwriting. At the same time, we revealed an important point of conflict: brokers commonly cite frustration as their motivation for calling underwriters, whereas underwriters describe the phone calls themselves as a source of frustration, for very different reasons:

This allows us to take a step back and think even more critically about why brokers feel the need to call underwriters. Depending on which perspective you prioritize, there are two paths forward worth exploring for enhancements to the Rocket Pro portal:
Communication and transparency
OR
Navigation and visibility of information
Nonetheless, this proved Rocket Pro Assist was in a valuable position to bridge the gap between both perspectives, by helping brokers navigate the portal and surfacing information without any additional effort.
Recommendations
After analyzing the gathered data, I made 4 recommendations to the product team working on the integration of Rocket Assist in Rocket Pro.
While these recommendations generally focused on the questions brokers ask underwriters and their solutions, I also highlighted the importance of bringing the value of AI to our users, rather than expecting them to put in additional effort to discover it. This could come in the form of dashboard insights that guide brokers based on their current pipelines, guidance based on past loans with similar circumstances, and so on. There are also design choices for Rocket Pro Assist that would make it more proactive in delivering value, such as opening immediately on log-in with updates and insights.
Nonetheless, what matters most is that we deliver valuable information to brokers sooner, and make it clear that Rocket Assist delivered it, to build trust.
Outcomes
Rocket Pro Assist was officially released on October 1st, 2025
Several recommendations I made were implemented in the product, including Pathfinder integration, loan actions and statuses, and guideline assistance.
Further, I had several discussions with Rocket Pro product managers about the conversational AI experience for brokers as a whole.
During this time, I was also conducting research for Rocket Pro Navigate. That product focused more on being a tool for mortgage brokers beyond their pipeline, helping them with general workflow tasks and assisting with guidelines. However, the experiences were similar; even when discussing them as a team, we would get tripped up trying to distinguish the products. This prompted me to bring some points of discussion to the product manager and lead product designer of Rocket Pro Navigate in particular:
If brokers have a bad experience with Rocket Pro Navigate or Rocket Assist, are we confident that they won't carry their bad experience or opinion from one product to another?
Do we believe that the difference between the products would be more clear for them than it is for us?
Knowing about brokers' AI usage and AI perceptions, what can we do to clearly differentiate our conversational AI products from those they find on other websites or LLMs like ChatGPT?
This prompted a shift in strategy that has not been made public and I cannot share due to an NDA, but it helped re-contextualize how we saw these products as a part of the broker user experience.