
Run exercise checks for parsing errors #84

Open
mitchelloharawild opened this issue Jan 27, 2025 · 1 comment
Labels
enhancement New feature or request

Comments

@mitchelloharawild
Collaborator

I'm currently writing some quarto-live exercises on syntax errors, and would like to check for the presence of specific syntax errors in my exercise checking code.

Example usage if helpful:

```{webr}
#| exercise: syntax
______
```

```{webr}
#| exercise: syntax
#| check: true
library(qlcheckr)
apply_checks(
  # "*/" is not a valid regular expression (quantifier at the start), so match it literally
  "Try to calculate `3 */ 5`, what happens?" = grepl("*/", ql_code(), fixed = TRUE),
  "The code you wrote should produce a syntax error; have you used `*/` as shown?" = exists_in(ql_errors(), grepl, pattern = "unexpected"),
  .msg_correct = "That's correct!<br>R returns an error because it didn't expect a division (/) to occur <b>immediately</b> after multiplication (*)."
)
```

I've found that this is marked as a TODO in the code (quoted below), but I'm raising an issue to highlight my need for this feature.

```js
// TODO: run user provided `error_check`
return await new shelter.RList({
  message: await shelter.evalR(`htmltools::HTML("
It looks like this might not be valid R code.
R cannot determine how to turn your text into a complete command.
You may have forgotten to fill in a blank,
to remove an underscore, to include a comma between arguments,
or to close an opening <code>&quot;</code>, <code>'</code>, <code>(</code>
or <code>{</code> with a matching <code>&quot;</code>, <code>'</code>,
<code>)</code> or <code>}</code>.
")`),
  correct: false,
  location: "append",
  type: "error",
})
```

I'm not sure what you have in mind for a user-provided error_check, but if it only applies to parsing errors, perhaps it could share the same check chunk as normal exercise grading (either via a separate check environment variable, or merged into .evaluate_result as an error).
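To make the suggestion concrete, here is a minimal sketch of one possible shape for that TODO: the names (`gradeParseError`, `errorCheck`, `defaultParseFeedback`) are hypothetical illustrations, not the actual quarto-live API. The idea is simply to fall back to the current hard-coded message unless the exercise author supplied their own parse-error check.

```javascript
// Illustrative default, mirroring the message currently hard-coded in the TODO.
const defaultParseFeedback = {
  message: "It looks like this might not be valid R code.",
  correct: false,
  location: "append",
  type: "error",
};

// Hypothetical dispatcher: let an author-provided `errorCheck` callback
// inspect the parse error and produce feedback, otherwise use the default.
function gradeParseError(parseError, errorCheck) {
  if (typeof errorCheck === "function") {
    const result = errorCheck(parseError);
    if (result) return result;
  }
  return defaultParseFeedback;
}

// Example: an author check that recognises the expected "unexpected" error.
const feedback = gradeParseError(
  { message: "unexpected '/' in \"3 */\"" },
  (err) =>
    /unexpected/.test(err.message)
      ? { message: "That syntax error is expected!", correct: true, type: "info" }
      : null
);
```

In this sketch an exercise that deliberately teaches syntax errors could mark the parse failure as correct, while exercises without an `errorCheck` keep today's behaviour.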

@georgestagg added the enhancement (New feature or request) label on Jan 28, 2025
@georgestagg
Member

Thanks for highlighting this; we definitely need to return to it and fill in the gap.

IIRC, the idea for a user-provided error_check (and code_check, line 39 of that file) was to provide functionality similar to learnr's *-code-check and *-error-check chunks: additional optional chunk types that would run if there is a parse error, and/or before evaluating user code.

However, I'm open to considering other schemes. As you suggest, for the moment it might be simpler to invoke the code in the standard check chunk with a slightly different environment. There should already be a .stage variable with the value "check" in the environment during the usual grading step; we could invoke the same chunk when there is a parsing error, but with .stage set to "error_check".
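As a rough sketch of that scheme (illustrative names only, not the real quarto-live internals): the same author-supplied check chunk runs in both situations, and branches on the `.stage` value placed in its environment.

```javascript
// Hypothetical runner: evaluate the author's check chunk with a given
// environment, which includes the `.stage` variable.
function runCheck(checkChunk, env) {
  return checkChunk(env);
}

// An author's check chunk might branch on `.stage` like this:
const authorCheck = (env) => {
  if (env[".stage"] === "error_check") {
    return { message: "Your code did not parse. Check the hint!", correct: false };
  }
  return { message: "Looks good.", correct: true };
};

// Normal grading step: `.stage` is "check".
const graded = runCheck(authorCheck, { ".stage": "check" });

// Parse failure: re-use the same chunk, but with `.stage` set to "error_check".
const onParseError = runCheck(authorCheck, { ".stage": "error_check" });
```

This keeps a single check chunk per exercise while still letting authors give parse-error-specific feedback, at the cost of requiring them to branch on `.stage` themselves.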
