Gradient AI Employee Perspectives: Innovation & Technology Culture
How does your team gather customer feedback throughout the product development process?
The product team at Gradient gathers feedback throughout the customer lifecycle. Product is brought into select pre-sales engagements when a sale relates to a new feature or product. We work alongside the customer service team on ongoing support to understand how customers get value from our products and where they might benefit from improvements or changes. We also run customer interviews and product-led quarterly business reviews to keep the feedback cycle going for every customer.
What is your main source for new product development ideas?
Customer feedback is the primary source of all product development at Gradient. Each new product or feature is initiated from feedback we hear from current or prospective customers. The cycle then begins with a voice-of-customer and market feedback exercise before any requirements are put in writing. This process ensures that we are building features that will benefit our customers and helps us be as efficient as possible with our team’s time.
What are some of the biggest lessons your team has learned when it comes to prioritizing customer input?
The biggest lesson we've learned is to get feedback from as many potential users as possible before committing to new work. As a startup, it is tempting to prioritize requests from each individual customer in an attempt to make everyone happy. The reality is that the best way to make everyone happy is to build products that everyone will love, and that starts with getting feedback from as many customers as possible.

What’s your rule for fast, safe releases — and what KPI proves it works?
We do not target a single rule to guarantee fast and safe software releases. Instead, reliability comes from a combination of proven engineering practices that reduce risk while preserving velocity.
We prioritize techniques such as feature flagging, schema-safe database changes and blue-green deployments with segmented rollouts. Together, these approaches allow teams to ship continuously, limit blast radius and recover quickly when issues arise.
The effectiveness of this model is ultimately measured through outcomes rather than intent. Sustained system uptime, low customer-visible incident rates and the ability to roll back or disable changes without disruption are the clearest indicators that fast, safe releases are working as designed.
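One of the practices named above, feature flagging with segmented rollouts, can be sketched in a few lines. This is a minimal illustration, not Gradient's actual implementation; the flag name, user IDs and percentage thresholds are hypothetical. The key idea is deterministic bucketing: each user lands in a stable bucket, so the rollout can be widened (or dialed back to zero) at runtime without a redeploy.

```python
import hashlib

def flag_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into a percentage-based rollout.

    Hashing the flag name plus user id maps each user to a stable
    bucket in [0, 100), so the same user always sees the same variant
    and the flag can be dialed up or disabled without a deploy.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Hypothetical usage: start at 5% of users, then widen as confidence grows.
if flag_enabled("new-dashboard", "user-1234", rollout_percent=5):
    pass  # serve the new code path
else:
    pass  # serve the stable code path
```

Because the bucketing is deterministic, disabling a misbehaving change is a configuration flip rather than a rollback, which is what keeps the blast radius small.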
What standard or metric defines “quality” in your stack?
Quality in our stack is defined by predictable outcomes under change. Rather than treating quality as a single metric or a static bar, we evaluate it through a small set of operational signals that demonstrate the system can evolve safely and reliably.
At the platform level, sustained system uptime, low customer-impacting incident rates and fast mean time to recovery (MTTR) indicate that changes are well-designed and well-contained. At the delivery level, a low change failure rate and the ability to deploy frequently without service degradation show that quality is built into the pipeline, not inspected after the fact.
Finally, architectural standards, such as backward-compatible schema changes, feature-flagged rollouts and automated validation, act as guardrails that make these outcomes repeatable. In our view, quality is proven not by how rarely we change the system, but by how confidently and safely we can change it.
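Two of the delivery signals mentioned, change failure rate and mean time to recovery, are simple to compute once deployments and incidents are tracked. The sketch below uses made-up numbers purely for illustration; it is not tied to any particular incident-tracking tool.

```python
from datetime import datetime, timedelta

def change_failure_rate(deploys: int, failed_deploys: int) -> float:
    """Share of deployments that caused a customer-impacting incident."""
    return failed_deploys / deploys if deploys else 0.0

def mttr(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean time to recovery across (started, resolved) incident pairs."""
    if not incidents:
        return timedelta(0)
    total = sum((end - start for start, end in incidents), timedelta(0))
    return total / len(incidents)

# Hypothetical month: 50 deploys, 2 of which caused incidents.
cfr = change_failure_rate(deploys=50, failed_deploys=2)  # 0.04, i.e. 4%
```

Tracking these as trends rather than one-off numbers is what makes them useful: a rising change failure rate or MTTR is an early warning that the guardrails need attention.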
Name one AI/automation that shipped recently and its impact on your team or the business.
The utility of AI coding assistants has reached an inflection point, allowing us to dramatically speed up many engineering activities.
We’ve recently been building out a new UI-heavy application that features multiple trends, charts and year-over-year views. It is meant to enable one of Gradient’s client personas to gain and present insights to their clients on how to more profitably manage a book of insurance business. Thus, the ability to slice and dice the data using different categorical and date-driven filters, and to render the resulting data and trends in myriad ways, is critical.
AI coding assistants allowed us to take a mockup of the UI prepared by a designer and generate nearly production-ready code in a little over a day of full-time engineering work. That output would have taken weeks prior to the advent of capable AI assistants. As a result, we spend less time turning the crank on code that mirrors the desired UI and more time on product design, nailing down business logic, thinking through edge cases and the many other things that make or break the success of a product.
