If you’ve been running Dynamics 365 tests with RSAT, you’ve probably already heard the news. At a recent Microsoft TechTalk, one of the highest-attended in the event’s history, the message was clear: RSAT is feature complete. No new capabilities. No roadmap. It still works, but it’s not going anywhere new.
For many D365 teams, this wasn’t exactly a shock. RSAT has always had a ceiling. But ‘feature complete’ from Microsoft is as close to an official end-of-life signal as the platform is likely to get, and it’s forced a lot of QA leads and ERP managers to have a conversation they’d been putting off: what comes next?
This post covers the landscape honestly: the three categories of RSAT alternatives teams are actually evaluating in 2026, what each one offers, and the three questions that will tell you quickly which approach is right for your environment.
Why RSAT Breaks the Way It Does
Before evaluating alternatives, it helps to understand the root cause of RSAT’s limitations, because several alternatives share the same underlying problem.
RSAT is a task recording tool. It captures the exact sequence of UI interactions a human makes in D365, then replays that sequence during test runs. It doesn’t understand what the process is trying to accomplish. It knows which button was clicked and in what order.
This works fine until the UI changes, which happens with every Microsoft release wave. When Wave 1 drops in April, or Wave 2 in October, RSAT recordings that were green last month start failing. Buttons get renamed. Forms get reorganized. A new required field appears in the middle of a workflow. Each of these changes requires someone to re-record the affected tests.
Multiply that by 150–300 UI changes in a typical wave release, across a test suite of any real size, and you have the regression sprint that consumes 2–3 developer-weeks twice a year. That’s the RSAT tax.
> “The fundamental problem isn’t that RSAT is old. It’s that every alternative that records UI interactions has the same ceiling, just with a better interface over it.”
The Three Categories of RSAT Alternatives
Teams evaluating RSAT alternatives in 2026 are mostly choosing between three architectural categories. Here’s an honest look at each:
| Category | What it offers | Main trade-off |
| --- | --- | --- |
| Low-code platforms (e.g. Leapwork, ACCELQ) | Visual test builders, no-code creation, RSAT import tools | Still UI-bound; breaks on layout changes; no cross-module E2E |
| Coded frameworks (Selenium + C#, SpecFlow) | Full flexibility, CI/CD integration, complex scenario handling | Slow to build; D365 dynamic IDs cause constant maintenance |
| AI test agents (e.g. Sofy) | Process-level understanding, self-healing, outcome validation | Higher initial investment; still emerging as a category |
Low-Code Platforms
Tools like Leapwork and ACCELQ offer visual, no-code interfaces that make test creation more accessible than RSAT and can import existing RSAT recordings to smooth the migration. These are meaningful improvements. The trade-off is that most still operate at the UI layer, meaning they break on the same types of changes that break RSAT, just with lower re-recording effort. For teams that want something accessible without a full architectural rethink, low-code is a reasonable step up.
Coded Frameworks
Selenium with C# or Java, SpecFlow for BDD-style tests, or custom WebDriver-based frameworks give developers complete control. Complex cross-module scenarios, CI/CD pipeline integration, conditional test logic: all are possible. The cost is time: building and maintaining a coded D365 test suite is slow, and D365’s dynamic element identifiers (which change between sessions and across updates) make maintenance a constant undertaking. This is the right path for teams with dedicated automation engineers and genuine long-term investment capacity.
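The dynamic-identifier problem is concrete. A minimal sketch in Python of the two locator strategies: the generated ids below are illustrative (not real D365 output), and the sketch assumes the UI exposes a stable logical control-name attribute (D365 finance and operations pages render one as `data-dyn-controlname`) that a locator can target instead of the session-generated id.

```python
# Sketch: brittle vs. resilient locators for a D365-style UI.
# Generated ids and control names here are illustrative examples only.

def brittle_locator(generated_id: str) -> str:
    """XPath tied to a session-generated id; goes stale when the id changes."""
    return f"//*[@id='{generated_id}']"

def resilient_locator(control_name: str) -> str:
    """XPath tied to the logical control name, which survives re-renders."""
    return f"//*[@data-dyn-controlname='{control_name}']"

# The same button can get a different generated id in a new session,
# so a recorded locator stops matching:
session_a = brittle_locator("VendPaymJournal_4_SubmitButton")
session_b = brittle_locator("VendPaymJournal_7_SubmitButton")
assert session_a != session_b  # same button, two stale-prone selectors

# The control-name locator is stable across sessions:
assert resilient_locator("SystemDefinedSaveButton") == \
       resilient_locator("SystemDefinedSaveButton")
```

Even with stable attributes, a coded suite still needs someone to maintain the mapping between control names and business steps, which is where the ongoing cost lives.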
AI Test Agents
The newest category, and the most architecturally distinct. AI agents don’t record UI interactions; they understand the business process behind the test. Instead of looking for a specific button in a specific location, an agent knows that the goal is to validate a vendor payment posting and navigates whatever UI is present to reach and validate that outcome.
The practical difference: when a Wave release renames a field or reorganizes a form, the agent detects the change and adapts automatically. No re-recording. No developer intervention. The test produces an assertion log showing whether the financial outcome was correct, not just whether a button was clicked.
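To make “assert the outcome, not the click” concrete, here is a hedged Python sketch. The function name, the record shape, and the field names are hypothetical; a real agent would pull the posted voucher from D365 itself (for example via an OData entity) rather than from an in-memory dict.

```python
# Sketch: outcome-level assertion instead of click-level replay.
# validate_vendor_payment and the voucher record shape are hypothetical.
from decimal import Decimal

def validate_vendor_payment(voucher: dict, expected_amount: Decimal) -> dict:
    """Return an assertion-log entry describing the financial outcome."""
    posted = voucher.get("Status") == "Posted"
    amount_ok = Decimal(voucher.get("AmountCur", "0")) == expected_amount
    return {
        "check": "vendor_payment_posted",
        "posted": posted,
        "amount_matches": amount_ok,
        "result": "pass" if posted and amount_ok else "fail",
    }

entry = validate_vendor_payment(
    {"Voucher": "APP-000123", "Status": "Posted", "AmountCur": "1250.00"},
    Decimal("1250.00"),
)
assert entry["result"] == "pass"
```

The point of the sketch: a UI rename cannot break this check, because the check never mentions the UI. Only a change to the financial outcome fails it.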
Three Questions That Tell You Which Category You’re Buying
Vendor demos make everything look like Level 3. Here’s how to cut through in an evaluation:
- Ask what happens when a button is permanently renamed, not temporarily missing.
If the answer involves retrying or waiting, that’s Level 1 selector retry, not self-healing. A genuine AI agent will tell you the element was renamed and explain how it adapted.
- Ask if it can test across modules in a single workflow.
RSAT and most alternatives test one module at a time. A test that follows a transaction from procurement through to the GL entry, validating the financial outcome at both ends, requires process-level understanding.
- Ask to see a self-healing event log from a real Wave release.
A real adaptive system produces a log: what changed, how the agent re-routed, and what the test validated. A tool that retries selectors produces a pass/fail log. The difference is visible immediately.
> Quick evaluation checklist:
> - What layer does healing operate at: DOM selectors, element matching, or process logic?
> - Can it validate a cross-module workflow end-to-end in a single test run?
> - Can you see a healing event log from an actual release wave scenario?
The Honest Transition Advice
There’s no perfect RSAT replacement that costs nothing, requires no change management, and covers everything from day one. The practical path most teams take is incremental: identify which RSAT scripts are regression tests versus documentation, start with the highest-risk cross-module workflows (Finance period close, procurement-to-pay, order-to-cash), and build automation coverage that validates outcomes rather than replays interactions.
The teams that get stuck in the transition are usually trying to replace their RSAT recordings 1:1 with a different tool that works the same way. The ones that move fastest use the migration as an opportunity to build test coverage that was never possible with RSAT: cross-module, outcome-validated, self-healing.
Microsoft declaring RSAT feature complete isn’t bad news for D365 testing. It’s an invitation to build something better.
> Exploring RSAT alternatives for your D365 environment?
> Sofy’s D365 test agents, covering Finance, Supply Chain, Sales, and Business Central, are built specifically for the release wave testing problem RSAT was never designed to solve. See how they work →