Rocket Pro Assist: Automating Broker-Underwriter Workflows

Rocket Mortgage | UX Research Intern

May 2025 - August 2025 • 8-week timeline • UX Research Lead
Problem

Mortgage brokers often call underwriters for guidance, disrupting both teams' workflows. The product team wanted to explore how a newly implemented chatbot for broker support tickets, Rocket Assist, could reduce these calls and improve broker self-service in the portal.

Goals

We needed to understand which broker-underwriter calls represented a real self-service opportunity: questions common enough to train on, simple enough not to need an underwriter, and worth prioritizing for release. We also needed to understand brokers' relationship with AI: what would it take for them to actually trust and use it?

My Role

I led mixed-methods research that informed shipped features reaching 25,000+ mortgage brokers and prompted a strategic shift in Rocket Pro's approach to AI chat integration.

Tools:

Qualtrics
Excel
Figma
dScout
Azure DevOps

Methods:

Surveys
User Interviews
Contextual Inquiry
Literature Review

Rocket Pro Assist In Action
Project Impact

Uncovered the specific questions that, when delegated to the chatbot, save brokers and underwriters 2-5 minutes per typical cross-team phone call.

Championed a more flexible approach to research at Rocket Pro by recruiting study participants beyond just brokers and encouraging UX-CX teamwork, leading to a new collaborative structure.

Delivered recommendations that shipped in the final release, including training the chatbot on Pathfinder data for general guideline questions, clarifying loan statuses and conditions in brokers' current pipelines, and standardizing internal processes.

Combined with my other LLM research, helped influence a broader strategy shift for chat-based AI products within Rocket Pro.

If you're interested in a deeper dive into this project, keep scrolling!
Otherwise, view my next case study, or jump back to the Home page

Research Deep Dive

Why This Research?

In May 2025, the Rocket Pro product team integrated Rocket Assist, a consumer-facing chatbot, into the Rocket Pro broker platform. It drastically improved the broker experience by letting brokers submit support tickets without opening a separate application. This prompted the product team to ask how else the chatbot could improve the broker experience and boost efficiency.

My research focused on one issue in the Rocket Pro space that delayed loan processing and was prime for Rocket Assist to solve: brokers calling underwriters with questions while processing a loan.

1.6X

Increase in work capacity per team if we shorten the average underwriter-broker call by one minute

1/3rd

Reduction in call volume if we eliminate calls underwriters must redirect to other teams, along with status-update and document-request calls

7,283

Emails per month that coincide with a phone call, which an alternative means of question resolution would also eliminate

User Research Process and Plan

The product team was looking for early, low-hanging fruit opportunities to expand Assist's functionality.

I proposed focusing on broker-underwriter phone calls because past research had already quantified their cost to both teams. They were an evidence-backed starting point for expanding Assist's capabilities and a natural fit for its conversational experience.

At this point, we had general topics associated with broker-underwriter phone calls. We lacked insights into the specifics of the calls, call frequency, and call duration, blocking our ability to prioritize what features and knowledge bases should be included in Assist.

I focused on identifying frequently asked, lower-effort questions that introduce the least room for error, reinforcing our long-term strategy of building trust with brokers in our AI-powered systems. Since this tool would be among our first AI experiences for mortgage brokers, their first impressions were critical. Starting small would enable us to troubleshoot and adapt the tool to user needs early.

Research Questions

After discussing with product managers what insights we lacked to push Rocket Assist forward in helping brokers before they decide to pick up the phone, I distilled our interest into 3 research questions:

1. What questions do brokers reach out to underwriters for via phone call?

2. How do these questions rank in terms of frequency and the effort required from underwriters to resolve them?

3. How receptive are underwriters and brokers to automating the responses to these questions?

Our primary goal was to reduce time spent on phone calls when brokers needed guidance, but I also saw an opportunity to benchmark AI sentiment before the tool launched, giving us a baseline to compare against when assessing its effectiveness later.

Gathering opinions on AI from both groups would also help us anticipate resistance to adoption: from brokers as the direct users, and from underwriters as the source of truth the tool would be replacing.

Research Plan

  1. Desk Research

  2. Contextual Inquiry

  3. Survey with Underwriters (130 participants)

  4. Interviews with Brokers (6 participants)

I chose a mixed-methods approach to complement existing qualitative underwriter data with quantitative breadth, specifically to assess question frequency and topic distribution across the population. Underwriters first entered the specific questions they received from brokers; these responses were then piped through the rest of the survey to maintain consistency. I embedded open-ended questions strategically to slow participants down and reduce recency bias, and to add context to AI sentiment ratings that a Likert scale alone wouldn't capture.

Since underwriters were the primary data source, broker interviews served as a validity check, ensuring the questions underwriters flagged as most common actually matched brokers' lived experience of what they call about. It also let me capture brokers' firsthand experience of making the calls, including their frustration and their own openness to AI as an alternative, which underwriters couldn't speak to.

Participants

The primary participants were 130 underwriters, each managing dozens of broker relationships, giving us population-level coverage we couldn't have achieved by surveying brokers directly. Six brokers served as secondary participants for additional context and validation.

Research Execution: Cross-team Collaboration

A major learning I took away from this project was the value of seeking out other team members to inform and progress the research.

Keeping this work visible in meetings and bringing it up with team members allowed me to constantly find new ways to push the project forward, enhance my approach, and bring it to a larger audience.

Research Results

I analyzed the survey data using descriptive statistics in Qualtrics Stats iQ and Excel. For the interviews, I conducted a thematic analysis in Lucidchart, coding responses and then pulling in the survey data to call out where the broker and underwriter findings converged and diverged.

Low-Hanging Fruit For Automation

We collected insights from 130 underwriters and 6 brokers, pinpointing the most frequent and low-effort questions made via phone. We found broker inquiries cluster around low-effort, high-frequency tasks, making them ideal candidates for automation.

Document Uploads & Reviews

  • Verifying file uploads

  • Clarification of loan conditions

Loan Status & Process Updates

  • Checking on clear to close status (CTC)

  • Requests to send closing documents

  • Questions about internal processes

Client Finances

  • Cross-checking client income calculations

  • Verifying client assets

  • Establishing a client's credit score

These insights let us pinpoint exactly what brokers were calling about within broad topics like loan status, client finances, and document uploads.

These inquiries require minimal underwriting judgment but consume significant time, creating an ideal automation opportunity without compromising decision quality.

This allows us to:

  • Confirm prioritization

  • Align volume with effort

  • Maximize the value of AI integration while minimizing the risk that incorrect information causes distrust

Reservations About Automation

We also inquired about general attitudes and concerns with incorporating automation into broker workflows, to help us anticipate potential pain points.

Underwriter & Broker Concerns

  • Frustrating brokers

  • Accuracy & trust

  • Loss of human touch

  • Accountability

Design Opportunities

  • Faster access to real-time info

  • Fewer disruptions to the work of underwriters & brokers

  • Integration with current info & data sources

  • Clear escalation paths

Beyond the Scope: Differing Perspectives of Phone Calls & Tension

We found agreement among underwriters and brokers that sometimes phone calls are a necessity, particularly if a loan scenario is unique.

Otherwise, the phone calls become frustrating. Brokers commonly cite frustration as their motivation for calling underwriters, whereas underwriters describe the phone calls themselves as a source of frustration.

Both perspectives stem from the same root: a lack of visibility into loan statuses and unclear status information.

This shaped how we approached the issues we wanted Rocket Pro Assist to solve:

  1. Surfacing information within the chat, via prompt or via immediate updates upon log-in

  2. Rocket Assist clarifying statuses or condition notes, within the limits of its capabilities

This helped inform how we could approach the integration of broker assistance in Rocket Assist.

Being strategic about how we portray the chatbot's capabilities is critical. All 6 brokers interviewed could cite a previous negative experience with chatbots. They recalled times when chatbots were inaccurate, appeared deceptively human, and failed to direct users to someone who could help.

Some brokers responded to these questions with real emotion, ranging from frustration to genuine unease, at the idea of fully automating away underwriter communication. This mirrors concerns raised by underwriters in our survey, who worried that over-reliance on AI would frustrate brokers and complicate their own roles.

Recommendations

After analyzing the gathered data, I made 4 recommendations to the product team working on the integration of Rocket Assist in Rocket Pro.

While these recommendations generally focused on the questions brokers ask underwriters and how to answer them, I also highlighted the importance of bringing the value of AI to our users, rather than expecting them to put in extra effort to discover that value themselves.

This could take the form of prompts in the chat to guide brokers, dashboard insights based on their current pipelines, or loan guidance based on past loans with similar circumstances. Design choices can also make Rocket Pro Assist more proactive in delivering value, such as opening immediately on log-in with updates and insights.

I also gave smaller recommendations to keep negative sentiment from brokers' past chatbot experiences from bleeding into Rocket Assist.



  1. We must make it clear in the chat interface that brokers are interacting with an AI, not another person. This is more than an issue of transparency; it's also a privacy concern, because brokers should be well aware that they are inputting data into an LLM, not a chat with another person.

  2. We must be transparent when a broker asks Rocket Assist for help with something beyond its capabilities. This needs to happen not only upfront, when the broker first prompts the chat, but also mid-conversation if the bot reaches a point where it is struggling to give a response. It should direct brokers to where they can get assistance, even if that means calling an underwriter.

Outcomes

Rocket Pro Assist was officially released on October 1, 2025.

Several recommendations I made were implemented in the product, including Pathfinder integration, loan actions and statuses, guideline assistance, and the disclaimer that the chat is AI-powered.

Further, I had several discussions with Rocket Pro product managers about the conversational AI experience for brokers as a whole.

At the same time, I was conducting research for Rocket Pro Navigate, an LLM tool so similar to Rocket Assist that even our own team would get tripped up trying to distinguish the products. This prompted me to bring several points of discussion to the product manager and lead product designer of Rocket Pro Navigate:

  • If brokers have a bad experience with Rocket Pro Navigate or Rocket Assist, are we confident they won't carry that bad experience or opinion from one product to the other?

  • Do we believe the difference between the products would be clearer to them than it is to us?

  • Knowing about brokers' AI usage and AI perceptions, what can we do to clearly differentiate our conversational AI products from those they find on other websites or LLMs like ChatGPT?

These discussions prompted a strategy shift that has not been made public and that I cannot share due to an NDA, but it helped re-contextualize how we saw these products as part of the broker user experience.

Reflection

What I Didn't Expect

Collaborating across departments to distribute my survey turned into one of the most valuable parts of the entire process. Conversations with people outside my immediate team consistently surfaced ideas and methods I wouldn't have considered on my own. Presenting the survey to a research team of 30 before deployment was a turning point, and their feedback sharpened my analysis and fundamentally changed how I think about survey design. It also changed how I ask for feedback. I learned to come prepared with targeted questions and potential solutions rather than opening the floor generically.

What I'm Taking Forward

Two practices from this project have stuck with me. First, keeping stakeholders looped in throughout the research, not just at the final share-out. This generated genuine curiosity and richer discussion by the time I presented findings. Second, designing presentations for the least detail-oriented person in the room, with an appendix ready for everyone else. Leading with impact and pulling up supporting data as questions arose kept the room engaged from start to finish.

What I'd Do Differently

My screener happened to yield an even split of brokers who frequently call underwriters and those who rarely do, a useful distribution, but one I arrived at by luck. Going forward, I'd build that intentionally into my screener design. I'd also add a question to the underwriter survey probing whether the brokers who call most are consistently the same people. Understanding that pattern could have helped us prioritize AI enhancements more strategically by targeting heavy callers first, then working toward brokers who engage less.


Get in Touch!

Email: ashless333@gmail.com

Phone: +1 812-989-6276