Fix test suite to properly handle async specs #109
Conversation
This test was previously failing, but asynchronously, outside of the test cycle. It was also broken because it did not toggle open the general tab, so hovering over the actual `versionTag` field could never happen. Now we clearly model the user journey. Bonus points for getting rid of `clock`: we can simply use `waitFor` to wait for a DOM element that is expected to appear.
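For illustration, a before/after sketch of the waiting style, assuming a `waitFor` helper along the lines of @testing-library plus the usual `domQuery` utility (the selector and `container` are made up for the example):

```js
// before: fake timers, asserting after an arbitrary tick
clock.tick(1000);
expect(domQuery('.bio-properties-panel-tooltip', container)).to.exist;

// after: wait until the expected DOM element actually shows up
await waitFor(() => {
  expect(domQuery('.bio-properties-panel-tooltip', container)).to.exist;
});
```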
Converted to draft as this needs further "action".
All tests are now green. ✅ We (read: I) changed some fields in the properties panel from inputs to text areas and forgot to adjust the tests accordingly. I also added some values to the fixture fields so we do not compare against undefined -> 5873297. The rest of the changes fix incorrect element IDs or wrong bootstrapping of the modeler.
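As an illustration of the input → textarea adjustment described above (a hedged sketch; the field name, selector and expected value are made up):

```js
// before: the spec still queried an <input> for the field
const input = domQuery('input[name="documentation"]', container);

// after: the field was changed to a <textarea>, so the spec queries that instead
const textarea = domQuery('textarea[name="documentation"]', container);

// the fixture now provides a value, so we never compare against undefined
expect(textarea.value).to.eql('some documentation');
```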
@@ -116,10 +116,10 @@ describe('provider/dmn - IdProps', function() {

    it('should set invalid', inject(async function(elementRegistry, selection) {

      // given
      const task = elementRegistry.get('Task_1');
🤡
I've pushed 93a903a so the XML is not printed on CI / when all tests are run. Overall the PR looks good. Great job!
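A minimal sketch of what such a guard can look like (the flag and helper name are hypothetical, not the actual change in 93a903a):

```js
// only dump the diagram XML when a developer runs specs locally,
// never on CI / full test runs (flag name is hypothetical)
function debugPrintXML(xml) {
  if (window.__env__ && window.__env__.CI) {
    return;
  }

  console.log(xml);
}
```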
Thanks @jarekdanielak for taking this on 💪
As a matter of fact, the test suite to date does not correctly handle async specs like the following, a prominent (and widely used) pattern:
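A sketch of the pattern in question (the spec name, helpers and property names are illustrative, not taken from the actual suite):

```js
it('should update name', inject(async function(elementRegistry, selection) {

  // given
  const element = elementRegistry.get('Decision_1');
  selection.select(element);

  // when
  await changeInput(domQuery('input[name="name"]', container), 'new name');

  // then
  expect(element.businessObject.name).to.eql('new name');
}));
```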
I fixed this via c9ee643.
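The actual change is in c9ee643; as a rough sketch of the general idea, an `inject`-style wrapper has to hand the promise returned by an async callback back to Mocha instead of dropping it (`getTestInjector` is a hypothetical stand-in for however the harness resolves dependencies):

```js
// sketch only: returning the (possibly async) callback's result
// makes Mocha wait for the promise before finishing the spec
function inject(fn) {
  return function() {
    return getTestInjector().invoke(fn);
  };
}
```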
Of course this uncovers a number of broken tests (24 / 130). One test (verifying the version tag tooltip) was completely broken: it used a non-existent utility and was structured so it would never work. The test is now properly structured and passes, modeling a user flow:
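A minimal sketch of that user flow, assuming @testing-library-style helpers (`fireEvent`, `waitFor`) next to the usual `inject`/`domQuery` utilities; the element ID, group ID and selectors are illustrative, and `container` is assumed to be the bootstrapped properties panel container:

```js
it('should show version tag tooltip', inject(async function(elementRegistry, selection) {

  // given
  const element = elementRegistry.get('Decision_1');
  selection.select(element);

  // when
  // toggle open the general group so the version tag field is actually rendered
  const group = domQuery('[data-group-id="group-general"]', container);
  fireEvent.click(domQuery('.bio-properties-panel-group-header', group));

  // hover the version tag field to trigger the tooltip
  const versionTag = domQuery('[data-entry-id="versionTag"]', container);
  fireEvent.mouseEnter(versionTag);

  // then
  await waitFor(() => {
    expect(domQuery('.bio-properties-panel-tooltip', container)).to.exist;
  });
}));
```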
I did not touch the other 24 failing tests, but I suggest we handle them ("make them work") in similar ways.