Add unsafety test for consumers of salsa macros #692

Draft · wants to merge 1 commit into master

Conversation

shanesveller

After I updated a personal project to a Salsa commit that included #680, I started receiving compiler errors due to my project-local use of #![forbid(unsafe_code)] together with the salsa::tracked macro. I wanted to ask whether this is an expected trade-off, or a property of earlier Salsa versions that the project might want to preserve.

The test included in this PR fails as of current master and on the first commit of #680, but passes on that commit's parent. I mostly created this PR as a place for discussion without polluting the comments of a merged PR; I don't personally have (or need) a vote on what the project chooses to do with this information. I just thought it might save a bit of legwork and contextualize the change that surprised me: I did not expect my own local crate to suddenly contain unsafe code by way of the macros.

If it's desired to permanently incorporate this test to help catch future regressions, I'm happy to make whatever changes are needed to line up with the desired content, commit message style, etc., but I'm not personally capable of designing or implementing a no-unsafe version of what #680 set out to do.

If preferred, I'm happy to close this and migrate the concern to an issue instead.
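For reference, this is roughly the shape of the added test, reconstructed from the error output below (the actual file in the PR may differ in its details; LogDatabase comes from the shared test support code):

#![forbid(unsafe_code)]

mod common;
use common::LogDatabase;

#[salsa::input]
struct MyInput {
    field: u32,
}

#[salsa::tracked]
struct MyTracked<'db> {
    field: u32,
}

#[salsa::tracked]
fn final_result(db: &dyn LogDatabase, input: MyInput) -> u32 {
    intermediate_result(db, input).field(db) * 2
}

#[salsa::tracked]
fn intermediate_result(db: &dyn LogDatabase, input: MyInput) -> MyTracked<'_> {
    MyTracked::new(db, input.field(db) / 2)
}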

cargo test --no-run --test downstream_unsafe:

   Compiling salsa v0.18.0 (/Users/shane/src/not_me/lsp/salsa)
error[E0453]: allow(unsafe_code) incompatible with previous forbid
  --> tests/downstream_unsafe.rs:20:58
   |
1  | #![forbid(unsafe_code)]
   |           ----------- `forbid` level set here
...
20 | fn final_result(db: &dyn LogDatabase, input: MyInput) -> u32 {
   |                                                          ^^^ overruled by previous forbid
                                                                                                                                                         
error: declaration of an `unsafe` function
  --> tests/downstream_unsafe.rs:20:58
   |
20 | fn final_result(db: &dyn LogDatabase, input: MyInput) -> u32 {
   |                                                          ^^^
   |
note: the lint level is defined here
  --> tests/downstream_unsafe.rs:1:11
   |
1  | #![forbid(unsafe_code)]
   |           ^^^^^^^^^^^
                                                                                                                                                         
error: usage of an `unsafe` block
  --> tests/downstream_unsafe.rs:20:58
   |
20 | fn final_result(db: &dyn LogDatabase, input: MyInput) -> u32 {
   |                                                          ^^^
                                                                                                                                                         
error[E0453]: allow(unsafe_code) incompatible with previous forbid
  --> tests/downstream_unsafe.rs:31:65
   |
1  | #![forbid(unsafe_code)]
   |           ----------- `forbid` level set here
...
31 | fn intermediate_result(db: &dyn LogDatabase, input: MyInput) -> MyTracked<'_> {
   |                                                                 ^^^^^^^^^ overruled by previous forbid
                                                                                                                                                         
error: declaration of an `unsafe` function
  --> tests/downstream_unsafe.rs:31:65
   |
31 | fn intermediate_result(db: &dyn LogDatabase, input: MyInput) -> MyTracked<'_> {
   |                                                                 ^^^^^^^^^
                                                                                                                                                         
error: usage of an `unsafe` block
  --> tests/downstream_unsafe.rs:31:65
   |
31 | fn intermediate_result(db: &dyn LogDatabase, input: MyInput) -> MyTracked<'_> {
   |                                                                 ^^^^^^^^^
                                                                                                                                                         
For more information about this error, try `rustc --explain E0453`.
error: could not compile `salsa` (test "downstream_unsafe") due to 6 previous errors

netlify bot commented Feb 16, 2025

Deploy Preview for salsa-rs canceled.

Latest commit: 696efd7
Latest deploy log: https://app.netlify.com/sites/salsa-rs/deploys/67b249a6834cea0008cc1838


MichaReiser commented Feb 17, 2025

Thanks. I like the idea of testing that the salsa macros don't emit code that triggers any errors. I'd prefer it if we could change our existing test to catch all clippy errors (including pedantic and restriction) rather than introducing a new test just for this. See here:

https://github.com/MichaReiser/salsa/blob/c48a9df2ec58d816e0f602ee3fee3ef483b2f1d4/tests/warnings/main.rs#L3

It would also be great if we could fix the macros. Would you be interested in adding a fix in your PR?

Edit: #691 might have removed the unsafe usages, but a test that catches all sorts of clippy errors in the generated code would still be good.
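For illustration, the broadened check could look something like this (just a sketch of the idea, not the current contents of tests/warnings/main.rs; note that clippy itself warns against denying the whole restriction group wholesale, so that part may need to be more selective):

#![deny(warnings)]
#![deny(clippy::all, clippy::pedantic, clippy::restriction)]

#[salsa::input]
struct MyInput {
    value: u32,
}

// Expanding a tracked function is enough to surface any lints
// triggered by the generated code.
#[salsa::tracked]
fn double(db: &dyn salsa::Database, input: MyInput) -> u32 {
    input.value(db) * 2
}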

shanesveller (author) commented

@MichaReiser Thanks for taking a look!

I'd prefer it if we could change our existing test to catch all clippy errors (including pedantic and restriction) rather than introducing a new test just for this.

That sounds super reasonable to me; I'll take a look at whether I can at least come up with a failing test in that style. Here's what I would have imagined based on your phrasing; let me know if you see any adjustments to make:

  • Any lints from the deny-by-default, correctness, pedantic, or restriction groups should fail such a test.
  • Any lints from the warn-by-default, perf, or nursery groups are perhaps candidates for a visible warning, though I acknowledge that without a CI failure they slip through pretty easily.
  • Any lints from style, suspicious, or especially complexity alone should perhaps be exempted from this particular test approach?

My personal clippy configs are generally quite pessimistic, so I recognize that my calibration for what actually constitutes an actionable problem may differ from the maintainers'.
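Concretely, I'd imagine the crate-level attributes of such a test mapping those tiers roughly like this (clippy group names only; whether restriction belongs in the deny tier is exactly the calibration question):

// Fails CI if the generated code trips any of these groups.
#![deny(clippy::correctness, clippy::pedantic, clippy::restriction)]
// Visible in the logs, but not a CI failure.
#![warn(clippy::perf, clippy::nursery)]
// Exempted from this particular test.
#![allow(clippy::style, clippy::suspicious, clippy::complexity)]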

It would also be great if we could fix the macros. Would you be interested in adding a fix in your PR?

Not sure if this is what you're asking, but I tried to say explicitly in the original PR description that I don't feel capable of providing such a fix myself. I've done no significant work with unsafe in any personal or professional project so far, and only minimal work authoring procedural macros. If I had a good idea of how to implement a fix, I probably would have started there and fallen back to an alignment discussion if needed. Proc macros, at least, are something I'd like to gain more practice with.

There was also always the possibility that you all feel judicious use of unsafe is warranted and that it's my expectations that need adjusting, so I thought it better to ask first.

@shanesveller shanesveller marked this pull request as draft February 17, 2025 22:08
MichaReiser commented

I don't think the goal is to forbid the use of any unsafe code in salsa or its macros. Instead, we should mark the generated code as such and allow the specific lint rules it violates where a fix isn't possible (e.g. where unsafe code is genuinely needed).

Having said that, a failing test for all clippy and standard lints would be a great starting point!
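As a sketch of that direction (an illustration, not existing salsa code): the macros could wrap each generated item with narrowly scoped allows for only the lints its expansion is known to trip, instead of a blanket allow.

use proc_macro2::TokenStream;
use quote::quote;

// Hypothetical helper: attach targeted lint allows to a generated item.
// The lint names are placeholders for whatever the expansion actually
// triggers; unsafe_code would only be allowed where unsafe is truly needed.
fn mark_as_generated(item: TokenStream) -> TokenStream {
    quote! {
        #[allow(clippy::used_underscore_binding, clippy::needless_lifetimes)]
        #item
    }
}

fn main() {
    let item: TokenStream = "fn generated() {}".parse().unwrap();
    println!("{}", mark_as_generated(item));
}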
