
Bump v1.129.0 #268

Open · wants to merge 1 commit into master
Conversation

@github-actions github-actions bot commented Mar 6, 2025

This PR bumps the version from v1.128.0 to v1.129.0.
Please review the changes and merge this PR if everything looks good.

Upstream release notes

Monitored upstream files

diff --git a/server/src/migrations/1740595460866-UsersAuditUuidv7PrimaryKey.ts b/server/src/migrations/1740595460866-UsersAuditUuidv7PrimaryKey.ts
index 997f718fd..59fc4dbd5 100644
--- a/server/src/migrations/1740595460866-UsersAuditUuidv7PrimaryKey.ts
+++ b/server/src/migrations/1740595460866-UsersAuditUuidv7PrimaryKey.ts
@@ -4,7 +4,7 @@ export class UsersAuditUuidv7PrimaryKey1740595460866 implements MigrationInterfa
     name = 'UsersAuditUuidv7PrimaryKey1740595460866'
 
     public async up(queryRunner: QueryRunner): Promise<void> {
-        await queryRunner.query(`DROP INDEX "public"."IDX_users_audit_deleted_at_asc_user_id_asc"`);
+        await queryRunner.query(`DROP INDEX "IDX_users_audit_deleted_at_asc_user_id_asc"`);
         await queryRunner.query(`ALTER TABLE "users_audit" DROP CONSTRAINT "PK_e9b2bdfd90e7eb5961091175180"`);
         await queryRunner.query(`ALTER TABLE "users_audit" DROP COLUMN "id"`);
         await queryRunner.query(`ALTER TABLE "users_audit" ADD "id" uuid NOT NULL DEFAULT immich_uuid_v7()`);
@@ -14,7 +14,7 @@ export class UsersAuditUuidv7PrimaryKey1740595460866 implements MigrationInterfa
     }
 
     public async down(queryRunner: QueryRunner): Promise<void> {
-        await queryRunner.query(`DROP INDEX "public"."IDX_users_audit_deleted_at"`);
+        await queryRunner.query(`DROP INDEX "IDX_users_audit_deleted_at"`);
         await queryRunner.query(`ALTER TABLE "users_audit" DROP CONSTRAINT "PK_e9b2bdfd90e7eb5961091175180"`);
         await queryRunner.query(`ALTER TABLE "users_audit" DROP COLUMN "id"`);
         await queryRunner.query(`ALTER TABLE "users_audit" ADD "id" SERIAL NOT NULL`);
diff --git a/server/src/migrations/1740739778549-CreatePartnersAuditTable.ts b/server/src/migrations/1740739778549-CreatePartnersAuditTable.ts
new file mode 100644
index 000000000..d9c9dc194
--- /dev/null
+++ b/server/src/migrations/1740739778549-CreatePartnersAuditTable.ts
@@ -0,0 +1,38 @@
+import { MigrationInterface, QueryRunner } from "typeorm";
+
+export class CreatePartnersAuditTable1740739778549 implements MigrationInterface {
+    name = 'CreatePartnersAuditTable1740739778549'
+
+    public async up(queryRunner: QueryRunner): Promise<void> {
+        await queryRunner.query(`CREATE TABLE "partners_audit" ("id" uuid NOT NULL DEFAULT immich_uuid_v7(), "sharedById" uuid NOT NULL, "sharedWithId" uuid NOT NULL, "deletedAt" TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp(), CONSTRAINT "PK_952b50217ff78198a7e380f0359" PRIMARY KEY ("id"))`);
+        await queryRunner.query(`CREATE INDEX "IDX_partners_audit_shared_by_id" ON "partners_audit" ("sharedById") `);
+        await queryRunner.query(`CREATE INDEX "IDX_partners_audit_shared_with_id" ON "partners_audit" ("sharedWithId") `);
+        await queryRunner.query(`CREATE INDEX "IDX_partners_audit_deleted_at" ON "partners_audit" ("deletedAt") `);
+        await queryRunner.query(`CREATE OR REPLACE FUNCTION partners_delete_audit() RETURNS TRIGGER AS
+              $$
+               BEGIN
+                INSERT INTO partners_audit ("sharedById", "sharedWithId")
+                SELECT "sharedById", "sharedWithId"
+                FROM OLD;
+                RETURN NULL;
+               END;
+              $$ LANGUAGE plpgsql`
+        );
+        await queryRunner.query(`CREATE OR REPLACE TRIGGER partners_delete_audit
+               AFTER DELETE ON partners
+               REFERENCING OLD TABLE AS OLD
+               FOR EACH STATEMENT
+               EXECUTE FUNCTION partners_delete_audit();
+            `);
+    }
+
+    public async down(queryRunner: QueryRunner): Promise<void> {
+        await queryRunner.query(`DROP INDEX "public"."IDX_partners_audit_deleted_at"`);
+        await queryRunner.query(`DROP INDEX "public"."IDX_partners_audit_shared_with_id"`);
+        await queryRunner.query(`DROP INDEX "public"."IDX_partners_audit_shared_by_id"`);
+        await queryRunner.query(`DROP TRIGGER partners_delete_audit`);
+        await queryRunner.query(`DROP FUNCTION partners_delete_audit`);
+        await queryRunner.query(`DROP TABLE "partners_audit"`);
+    }
+
+}
diff --git a/server/src/migrations/1741027685381-ResetMemories.ts b/server/src/migrations/1741027685381-ResetMemories.ts
new file mode 100644
index 000000000..6a8037221
--- /dev/null
+++ b/server/src/migrations/1741027685381-ResetMemories.ts
@@ -0,0 +1,14 @@
+import { MigrationInterface, QueryRunner } from 'typeorm';
+
+export class ResetMemories1741027685381 implements MigrationInterface {
+  name = 'ResetMemories1741027685381';
+
+  public async up(queryRunner: QueryRunner): Promise<void> {
+    await queryRunner.query(`DELETE FROM "memories"`);
+    await queryRunner.query(`DELETE FROM "system_metadata" WHERE "key" = 'memories-state'`);
+  }
+
+  public async down(): Promise<void> {
+    // nothing to do
+  }
+}
diff --git a/server/src/migrations/1741179334403-MoveHistoryUuidEntityId.ts b/server/src/migrations/1741179334403-MoveHistoryUuidEntityId.ts
new file mode 100644
index 000000000..449272341
--- /dev/null
+++ b/server/src/migrations/1741179334403-MoveHistoryUuidEntityId.ts
@@ -0,0 +1,26 @@
+import { MigrationInterface, QueryRunner } from 'typeorm';
+
+export class MoveHistoryUuidEntityId1741179334403 implements MigrationInterface {
+  name = 'MoveHistoryUuidEntityId1741179334403';
+
+  public async up(queryRunner: QueryRunner): Promise<void> {
+    await queryRunner.query(`ALTER TABLE "move_history" ALTER COLUMN "entityId" TYPE uuid USING "entityId"::uuid;`);
+    await queryRunner.query(`delete from "move_history"
+      where
+        "move_history"."entityId" not in (
+          select
+            "id"
+          from
+            "assets"
+          where
+            "assets"."id" = "move_history"."entityId"
+        )
+        and "move_history"."pathType" = 'original'
+  `)
+  }
+
+  public async down(queryRunner: QueryRunner): Promise<void> {
+    await queryRunner.query(`ALTER TABLE "move_history" ALTER COLUMN "entityId" TYPE character varying`);
+  }
+}
+
diff --git a/server/Dockerfile b/server/Dockerfile
index 532c39c42..f46ea8d0e 100644
--- a/server/Dockerfile
+++ b/server/Dockerfile
@@ -1,5 +1,5 @@
 # dev build
-FROM ghcr.io/immich-app/base-server-dev:20250218@sha256:04df131dafca34538685453e4a00387ffe14288edff43cc68cf44feb76c8f4c0 AS dev
+FROM ghcr.io/immich-app/base-server-dev:20250304@sha256:bc8d0c8d5c6d00625d01c84785435383651a9d57cb6cd1b1430cf0bcb58e4a80 AS dev
 
 RUN apt-get install --no-install-recommends -yqq tini
 WORKDIR /usr/src/app
@@ -25,7 +25,7 @@ COPY --from=dev /usr/src/app/node_modules/@img ./node_modules/@img
 COPY --from=dev /usr/src/app/node_modules/exiftool-vendored.pl ./node_modules/exiftool-vendored.pl
 
 # web build
-FROM node:22.13.1-alpine3.20@sha256:c52e20859a92b3eccbd3a36c5e1a90adc20617d8d421d65e8a622e87b5dac963 AS web
+FROM node:22.14.0-alpine3.20@sha256:40be979442621049f40b1d51a26b55e281246b5de4e5f51a18da7beb6e17e3f9 AS web
 
 WORKDIR /usr/src/open-api/typescript-sdk
 COPY open-api/typescript-sdk/package*.json open-api/typescript-sdk/tsconfig*.json ./
diff --git a/web/Dockerfile b/web/Dockerfile
index fc2a9e88c..8c2e67e62 100644
--- a/web/Dockerfile
+++ b/web/Dockerfile
@@ -1,4 +1,4 @@
-FROM node:22.13.1-alpine3.20@sha256:c52e20859a92b3eccbd3a36c5e1a90adc20617d8d421d65e8a622e87b5dac963
+FROM node:22.14.0-alpine3.20@sha256:40be979442621049f40b1d51a26b55e281246b5de4e5f51a18da7beb6e17e3f9
 
 RUN apk add --no-cache tini
 USER node
diff --git a/machine-learning/Dockerfile b/machine-learning/Dockerfile
index d88873114..8761586de 100644
--- a/machine-learning/Dockerfile
+++ b/machine-learning/Dockerfile
@@ -1,6 +1,6 @@
 ARG DEVICE=cpu
 
-FROM python:3.11-bookworm@sha256:14b4620f59a90f163dfa6bd252b68743f9a41d494a9fde935f9d7669d98094bb AS builder-cpu
+FROM python:3.11-bookworm@sha256:68a8863d0625f42d47e0684f33ca02f19d6094ef859a8af237aaf645195ed477 AS builder-cpu
 
 FROM builder-cpu AS builder-openvino
 
@@ -34,7 +34,7 @@ RUN python3 -m venv /opt/venv
 COPY poetry.lock pyproject.toml ./
 RUN poetry install --sync --no-interaction --no-ansi --no-root --with ${DEVICE} --without dev
 
-FROM python:3.11-slim-bookworm@sha256:42420f737ba91d509fc60d5ed65ed0492678a90c561e1fa08786ae8ba8b52eda AS prod-cpu
+FROM python:3.11-slim-bookworm@sha256:614c8691ab74150465ec9123378cd4dde7a6e57be9e558c3108df40664667a4c AS prod-cpu
 
 FROM prod-cpu AS prod-openvino
 
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index 08437e17c..fd0edf9cb 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -56,7 +56,7 @@ services:
 
   database:
     container_name: immich_postgres
-    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
+    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:739cdd626151ff1f796dc95a6591b55a714f341c737e27f045019ceabf8e8c52
     environment:
       POSTGRES_PASSWORD: ${DB_PASSWORD}
       POSTGRES_USER: ${DB_USERNAME}
diff --git a/docs/docs/install/environment-variables.md b/docs/docs/install/environment-variables.md
index 16f05b633..8b9f74d45 100644
--- a/docs/docs/install/environment-variables.md
+++ b/docs/docs/install/environment-variables.md
@@ -11,7 +11,7 @@ Just restarting the containers does not replace the environment within the conta
 
 In order to recreate the container using docker compose, run `docker compose up -d`.
 In most cases docker will recognize that the `.env` file has changed and recreate the affected containers.
-If this should not work, try running `docker compose up -d --force-recreate`.
+If this does not work, try running `docker compose up -d --force-recreate`.
 
 :::
 
@@ -20,8 +20,8 @@ If this should not work, try running `docker compose up -d --force-recreate`.
 | Variable           | Description                     |  Default  | Containers               |
 | :----------------- | :------------------------------ | :-------: | :----------------------- |
 | `IMMICH_VERSION`   | Image tags                      | `release` | server, machine learning |
-| `UPLOAD_LOCATION`  | Host Path for uploads           |           | server                   |
-| `DB_DATA_LOCATION` | Host Path for Postgres database |           | database                 |
+| `UPLOAD_LOCATION`  | Host path for uploads           |           | server                   |
+| `DB_DATA_LOCATION` | Host path for Postgres database |           | database                 |
 
 :::tip
 These environment variables are used by the `docker-compose.yml` file and do **NOT** affect the containers directly.
@@ -33,15 +33,15 @@ These environment variables are used by the `docker-compose.yml` file and do **N
 | :---------------------------------- | :---------------------------------------------------------------------------------------- | :--------------------------: | :----------------------- | :----------------- |
 | `TZ`                                | Timezone                                                                                  |        <sup>\*1</sup>        | server                   | microservices      |
 | `IMMICH_ENV`                        | Environment (production, development)                                                     |         `production`         | server, machine learning | api, microservices |
-| `IMMICH_LOG_LEVEL`                  | Log Level (verbose, debug, log, warn, error)                                              |            `log`             | server, machine learning | api, microservices |
-| `IMMICH_MEDIA_LOCATION`             | Media Location inside the container ⚠️**You probably shouldn't set this**<sup>\*2</sup>⚠️ |   `./upload`<sup>\*3</sup>   | server                   | api, microservices |
+| `IMMICH_LOG_LEVEL`                  | Log level (verbose, debug, log, warn, error)                                              |            `log`             | server, machine learning | api, microservices |
+| `IMMICH_MEDIA_LOCATION`             | Media location inside the container ⚠️**You probably shouldn't set this**<sup>\*2</sup>⚠️ |   `./upload`<sup>\*3</sup>   | server                   | api, microservices |
 | `IMMICH_CONFIG_FILE`                | Path to config file                                                                       |                              | server                   | api, microservices |
 | `NO_COLOR`                          | Set to `true` to disable color-coded log output                                           |           `false`            | server, machine learning |                    |
-| `CPU_CORES`                         | Amount of cores available to the immich server                                            | auto-detected cpu core count | server                   |                    |
+| `CPU_CORES`                         | Number of cores available to the Immich server                                            | auto-detected CPU core count | server                   |                    |
 | `IMMICH_API_METRICS_PORT`           | Port for the OTEL metrics                                                                 |            `8081`            | server                   | api                |
 | `IMMICH_MICROSERVICES_METRICS_PORT` | Port for the OTEL metrics                                                                 |            `8082`            | server                   | microservices      |
 | `IMMICH_PROCESS_INVALID_IMAGES`     | When `true`, generate thumbnails for invalid images                                       |                              | server                   | microservices      |
-| `IMMICH_TRUSTED_PROXIES`            | List of comma separated IPs set as trusted proxies                                        |                              | server                   | api                |
+| `IMMICH_TRUSTED_PROXIES`            | List of comma-separated IPs set as trusted proxies                                        |                              | server                   | api                |
 | `IMMICH_IGNORE_MOUNT_CHECK_ERRORS`  | See [System Integrity](/docs/administration/system-integrity)                             |                              | server                   | api, microservices |
 
 \*1: `TZ` should be set to a `TZ identifier` from [this list][tz-list]. For example, `TZ="Etc/UTC"`.
@@ -50,7 +50,7 @@ These environment variables are used by the `docker-compose.yml` file and do **N
 \*2: This path is where the Immich code looks for the files, which is internal to the docker container. Setting it to a path on your host will certainly break things, you should use the `UPLOAD_LOCATION` variable instead.
 
 \*3: With the default `WORKDIR` of `/usr/src/app`, this path will resolve to `/usr/src/app/upload`.
-It only need to be set if the Immich deployment method is changing.
+It only needs to be set if the Immich deployment method is changing.
 
 ## Workers
 
@@ -75,12 +75,12 @@ Information on the current workers can be found [here](/docs/administration/jobs
 | Variable                            | Description                                                              |   Default    | Containers                     |
 | :---------------------------------- | :----------------------------------------------------------------------- | :----------: | :----------------------------- |
 | `DB_URL`                            | Database URL                                                             |              | server                         |
-| `DB_HOSTNAME`                       | Database Host                                                            |  `database`  | server                         |
-| `DB_PORT`                           | Database Port                                                            |    `5432`    | server                         |
-| `DB_USERNAME`                       | Database User                                                            |  `postgres`  | server, database<sup>\*1</sup> |
-| `DB_PASSWORD`                       | Database Password                                                        |  `postgres`  | server, database<sup>\*1</sup> |
-| `DB_DATABASE_NAME`                  | Database Name                                                            |   `immich`   | server, database<sup>\*1</sup> |
-| `DB_VECTOR_EXTENSION`<sup>\*2</sup> | Database Vector Extension (one of [`pgvector`, `pgvecto.rs`])            | `pgvecto.rs` | server                         |
+| `DB_HOSTNAME`                       | Database host                                                            |  `database`  | server                         |
+| `DB_PORT`                           | Database port                                                            |    `5432`    | server                         |
+| `DB_USERNAME`                       | Database user                                                            |  `postgres`  | server, database<sup>\*1</sup> |
+| `DB_PASSWORD`                       | Database password                                                        |  `postgres`  | server, database<sup>\*1</sup> |
+| `DB_DATABASE_NAME`                  | Database name                                                            |   `immich`   | server, database<sup>\*1</sup> |
+| `DB_VECTOR_EXTENSION`<sup>\*2</sup> | Database vector extension (one of [`pgvector`, `pgvecto.rs`])            | `pgvecto.rs` | server                         |
 | `DB_SKIP_MIGRATIONS`                | Whether to skip running migrations on startup (one of [`true`, `false`]) |   `false`    | server                         |
 
 \*1: The values of `DB_USERNAME`, `DB_PASSWORD`, and `DB_DATABASE_NAME` are passed to the Postgres container as the variables `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DB` in `docker-compose.yml`.
@@ -103,18 +103,18 @@ When `DB_URL` is defined, the `DB_HOSTNAME`, `DB_PORT`, `DB_USERNAME`, `DB_PASSW
 | Variable         | Description    | Default | Containers |
 | :--------------- | :------------- | :-----: | :--------- |
 | `REDIS_URL`      | Redis URL      |         | server     |
-| `REDIS_SOCKET`   | Redis Socket   |         | server     |
-| `REDIS_HOSTNAME` | Redis Host     | `redis` | server     |
-| `REDIS_PORT`     | Redis Port     | `6379`  | server     |
-| `REDIS_USERNAME` | Redis Username |         | server     |
-| `REDIS_PASSWORD` | Redis Password |         | server     |
-| `REDIS_DBINDEX`  | Redis DB Index |   `0`   | server     |
+| `REDIS_SOCKET`   | Redis socket   |         | server     |
+| `REDIS_HOSTNAME` | Redis host     | `redis` | server     |
+| `REDIS_PORT`     | Redis port     | `6379`  | server     |
+| `REDIS_USERNAME` | Redis username |         | server     |
+| `REDIS_PASSWORD` | Redis password |         | server     |
+| `REDIS_DBINDEX`  | Redis DB index |   `0`   | server     |
 
 :::info
 All `REDIS_` variables must be provided to all Immich workers, including `api` and `microservices`.
 
 `REDIS_URL` must start with `ioredis://` and then include a `base64` encoded JSON string for the configuration.
-More info can be found in the upstream [ioredis] documentation.
+More information can be found in the upstream [ioredis] documentation.
 
 When `REDIS_URL` or `REDIS_SOCKET` are defined, the `REDIS_HOSTNAME`, `REDIS_PORT`, `REDIS_USERNAME`, `REDIS_PASSWORD`, and `REDIS_DBINDEX` variables are ignored.
 :::
@@ -181,7 +181,11 @@ Redis (Sentinel) URL example JSON before encoding:
 
 :::info
 
-Other machine learning parameters can be tuned from the admin UI.
+While the `textual` model is the only one required for smart search, some users may experience slow first searches
+due to backups triggering loading of the other models into memory, which blocks other requests until completed.
+To avoid this, you can preload the other models (`visual`, `recognition`, and `detection`) if you have enough RAM to do so.
+
+Additional machine learning parameters can be tuned from the admin UI.
 
 :::
 
@@ -212,7 +216,7 @@ the `_FILE` variable should be set to the path of a file containing the variable
 details on how to use Docker Secrets in the Postgres image.
 
 \*2: See [this comment][docker-secrets-example] for an example of how
-to use use a Docker secret for the password in the Redis container.
+to use a Docker secret for the password in the Redis container.
 
 [tz-list]: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
 [docker-secrets-example]: https://github.com/docker-library/redis/issues/46#issuecomment-335326234

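The most substantive schema change above is the new `partners_audit` table and its statement-level delete trigger. As a rough sketch (not part of this PR or the upstream release), a reviewer could sanity-check the trigger against a disposable development database after the migration has run; the `immich_postgres` container name and the `DB_*` variables are taken from the diff above, and the partner UUID is a placeholder:

```sh
# Sketch only: confirm that deleting rows from "partners" produces matching
# rows in "partners_audit". Everything runs in one transaction that is rolled
# back, so no data is actually removed.
docker exec -i immich_postgres psql -U "${DB_USERNAME:-postgres}" -d "${DB_DATABASE_NAME:-immich}" <<'SQL'
BEGIN;
-- Placeholder UUID; substitute a real "sharedById" from your dev data.
DELETE FROM partners WHERE "sharedById" = '00000000-0000-0000-0000-000000000000';
-- The AFTER DELETE ... FOR EACH STATEMENT trigger has already fired at this
-- point, so any deleted pairs should be visible in the audit table.
SELECT "sharedById", "sharedWithId", "deletedAt"
FROM partners_audit
ORDER BY "deletedAt" DESC
LIMIT 5;
ROLLBACK;
SQL
```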

Base image

Check the base images for recent relevant changes:
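A hedged aside (not part of the upstream checklist): the newly pinned tags can be resolved locally and compared against the digests in the diff above, assuming the `docker buildx` plugin is installed:

```sh
# Sketch only: resolve each bumped tag and compare the reported digest with
# the sha256 pinned in the Dockerfile / docker-compose changes above.
docker buildx imagetools inspect ghcr.io/immich-app/base-server-dev:20250304
docker buildx imagetools inspect node:22.14.0-alpine3.20
docker buildx imagetools inspect python:3.11-slim-bookworm
docker buildx imagetools inspect docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0
```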

Checklist

  • Review the changes above
  • Possibly write a news entry (and push it to this PR)
  • Wait for the CI to finish
  • Merge the PR

ref #267
