chore(deps): update container image docker.io/localai/localai to v2.19.2 by renovate (#24258)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-aio-cpu` -> `v2.19.2-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-aio-gpu-nvidia-cuda-11` -> `v2.19.2-aio-gpu-nvidia-cuda-11` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-aio-gpu-nvidia-cuda-12` -> `v2.19.2-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-cublas-cuda11-ffmpeg-core` -> `v2.19.2-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-cublas-cuda12-ffmpeg-core` -> `v2.19.2-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-cublas-cuda12-core` -> `v2.19.2-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-ffmpeg-core` -> `v2.19.2-ffmpeg-core` |
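
If you need to pin one of these updated images in your own deployment, a minimal Helm values override could look like the sketch below. The key layout (`ffmpegImage.repository` / `pullPolicy` / `tag`) and the digest are copied from the values.yaml diff further down in this commit; treat the exact keys as chart-specific.

```yaml
# Minimal sketch of a values override pinning the updated ffmpeg image by tag
# and digest. Keys and digest are taken from the diff below; adjust them to
# the chart you actually deploy.
ffmpegImage:
  repository: docker.io/localai/localai
  pullPolicy: IfNotPresent
  tag: v2.19.2-ffmpeg-core@sha256:cedf339779f3dec9f58d6033bc76c4ddd2581adcb942eec8e3e20ad85fff70b2
```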

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.19.2`](https://togithub.com/mudler/LocalAI/releases/tag/v2.19.2)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.19.1...v2.19.2)

This release is a patch release to fix well-known issues from the 2.19.x series.

##### What's Changed

##### Bug fixes 🐛

- fix: pin setuptools 69.5.1 by
[@&#8203;fakezeta](https://togithub.com/fakezeta) in
[https://github.com/mudler/LocalAI/pull/2949](https://togithub.com/mudler/LocalAI/pull/2949)
- fix(cuda): downgrade to 12.0 to increase compatibility range by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2994](https://togithub.com/mudler/LocalAI/pull/2994)
- fix(llama.cpp): do not set anymore lora_base by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2999](https://togithub.com/mudler/LocalAI/pull/2999)

##### Exciting New Features 🎉

- ci(Makefile): reduce binary size by compressing by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2947](https://togithub.com/mudler/LocalAI/pull/2947)
- feat(p2p): warn the user to start with --p2p by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2993](https://togithub.com/mudler/LocalAI/pull/2993)
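
As a rough illustration of the `--p2p` hint above, a compose-style sketch could look like the following, assuming the LocalAI images forward extra container arguments to the `local-ai` binary (the pattern upstream examples use); the service name, port mapping, and the `run` subcommand are illustrative and not taken from this PR.

```yaml
# Hypothetical docker-compose sketch enabling p2p mode with one of the updated images.
# Assumption: container args are passed through to the local-ai binary by the entrypoint.
services:
  local-ai:
    image: docker.io/localai/localai:v2.19.2-ffmpeg-core
    command: ["run", "--p2p"]   # --p2p is the flag referenced in the changelog entry above
    ports:
      - "8080:8080"             # LocalAI's default API port
```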

##### 🧠 Models

- models(gallery): add tulu 8b and 70b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2931](https://togithub.com/mudler/LocalAI/pull/2931)
- models(gallery): add suzume-orpo by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2932](https://togithub.com/mudler/LocalAI/pull/2932)
- models(gallery): add archangel_sft_pythia2-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2933](https://togithub.com/mudler/LocalAI/pull/2933)
- models(gallery): add celestev1.2 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2937](https://togithub.com/mudler/LocalAI/pull/2937)
- models(gallery): add calme-2.3-phi3-4b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2939](https://togithub.com/mudler/LocalAI/pull/2939)
- models(gallery): add calme-2.8-qwen2-7b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2940](https://togithub.com/mudler/LocalAI/pull/2940)
- models(gallery): add StellarDong-72b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2941](https://togithub.com/mudler/LocalAI/pull/2941)
- models(gallery): add calme-2.4-llama3-70b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2942](https://togithub.com/mudler/LocalAI/pull/2942)
- models(gallery): add llama3.1 70b and 8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3000](https://togithub.com/mudler/LocalAI/pull/3000)

##### 📖 Documentation and examples

- docs: add federation by [@&#8203;mudler](https://togithub.com/mudler)
in
[https://github.com/mudler/LocalAI/pull/2929](https://togithub.com/mudler/LocalAI/pull/2929)
- docs: ⬆️ update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2935](https://togithub.com/mudler/LocalAI/pull/2935)

##### 👒 Dependencies

- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2936](https://togithub.com/mudler/LocalAI/pull/2936)
- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2943](https://togithub.com/mudler/LocalAI/pull/2943)
- chore(deps): Bump grpcio from 1.64.1 to 1.65.1 in
/backend/python/openvoice by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2956](https://togithub.com/mudler/LocalAI/pull/2956)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/sentencetransformers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2955](https://togithub.com/mudler/LocalAI/pull/2955)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/bark
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2951](https://togithub.com/mudler/LocalAI/pull/2951)
- chore(deps): Bump docs/themes/hugo-theme-relearn from `1b2e139` to
`7aec99b` by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2952](https://togithub.com/mudler/LocalAI/pull/2952)
- chore(deps): Bump langchain from 0.2.8 to 0.2.10 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2959](https://togithub.com/mudler/LocalAI/pull/2959)
- chore(deps): Bump numpy from 1.26.4 to 2.0.1 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2958](https://togithub.com/mudler/LocalAI/pull/2958)
- chore(deps): Bump sqlalchemy from 2.0.30 to 2.0.31 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2957](https://togithub.com/mudler/LocalAI/pull/2957)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/vllm
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2964](https://togithub.com/mudler/LocalAI/pull/2964)
- chore(deps): Bump llama-index from 0.10.55 to 0.10.56 in
/examples/chainlit by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2966](https://togithub.com/mudler/LocalAI/pull/2966)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/common/template by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2963](https://togithub.com/mudler/LocalAI/pull/2963)
- chore(deps): Bump weaviate-client from 4.6.5 to 4.6.7 in
/examples/chainlit by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2965](https://togithub.com/mudler/LocalAI/pull/2965)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/transformers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2970](https://togithub.com/mudler/LocalAI/pull/2970)
- chore(deps): Bump openai from 1.35.13 to 1.37.0 in /examples/functions
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2973](https://togithub.com/mudler/LocalAI/pull/2973)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/diffusers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2969](https://togithub.com/mudler/LocalAI/pull/2969)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/exllama2 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2971](https://togithub.com/mudler/LocalAI/pull/2971)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/rerankers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2974](https://togithub.com/mudler/LocalAI/pull/2974)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/coqui by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2980](https://togithub.com/mudler/LocalAI/pull/2980)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/parler-tts by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2982](https://togithub.com/mudler/LocalAI/pull/2982)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/vall-e-x by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2981](https://togithub.com/mudler/LocalAI/pull/2981)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/transformers-musicgen by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2990](https://togithub.com/mudler/LocalAI/pull/2990)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/autogptq by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2984](https://togithub.com/mudler/LocalAI/pull/2984)
- chore(deps): Bump llama-index from 0.10.55 to 0.10.56 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2986](https://togithub.com/mudler/LocalAI/pull/2986)
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in
/backend/python/mamba by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2989](https://togithub.com/mudler/LocalAI/pull/2989)
- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2992](https://togithub.com/mudler/LocalAI/pull/2992)
- chore(deps): Bump langchain-community from 0.2.7 to 0.2.9 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2960](https://togithub.com/mudler/LocalAI/pull/2960)
- chore(deps): Bump openai from 1.35.13 to 1.37.0 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2961](https://togithub.com/mudler/LocalAI/pull/2961)
- chore(deps): Bump langchain from 0.2.8 to 0.2.10 in
/examples/functions by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2975](https://togithub.com/mudler/LocalAI/pull/2975)
- chore(deps): Bump openai from 1.35.13 to 1.37.0 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2988](https://togithub.com/mudler/LocalAI/pull/2988)
- chore(deps): Bump langchain from 0.2.8 to 0.2.10 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2987](https://togithub.com/mudler/LocalAI/pull/2987)
- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2995](https://togithub.com/mudler/LocalAI/pull/2995)

##### Other Changes

- ci(Makefile): enable p2p on cross-arm64 builds by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2928](https://togithub.com/mudler/LocalAI/pull/2928)

**Full Changelog**:
https://github.com/mudler/LocalAI/compare/v2.19.1...v2.19.2

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNy40NDAuNyIsInVwZGF0ZWRJblZlciI6IjM3LjQ0MC43IiwidGFyZ2V0QnJhbmNoIjoibWFzdGVyIiwibGFiZWxzIjpbImF1dG9tZXJnZSIsInVwZGF0ZS9kb2NrZXIvZ2VuZXJhbC9ub24tbWFqb3IiXX0=-->
---

commit 93b7796520 (parent 206b50796f)
TrueCharts Bot, 2024-07-25 02:36:20 +02:00, committed by GitHub
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database)
2 changed files with 8 additions and 8 deletions


@@ -33,4 +33,4 @@ sources:
   - https://github.com/truecharts/charts/tree/master/charts/stable/local-ai
   - https://hub.docker.com/r/localai/localai
 type: application
-version: 11.11.3
+version: 11.11.4


@@ -5,15 +5,15 @@ image:
 ffmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.1-ffmpeg-core@sha256:c63afdd8ede2b29b8a8fa15dc3b5370d33c28deab45d3334ff07064fe0682662
+  tag: v2.19.2-ffmpeg-core@sha256:cedf339779f3dec9f58d6033bc76c4ddd2581adcb942eec8e3e20ad85fff70b2
 cublasCuda12Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.1-cublas-cuda12-core@sha256:89d0680ec8690fe53cbe4f1608fd514277a425e6dd5e69978d9ac9ee8d9f9d1e
+  tag: v2.19.2-cublas-cuda12-core@sha256:368ab8bdf7a48f9d6b77720a823aa57ee7c971d64425ab98fc902c859b567a91
 cublasCuda12FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.1-cublas-cuda12-ffmpeg-core@sha256:fefb32ece781e700a43771c4b0108a3af0d5a7c0d5c1849c17d7f265e9408b3f
+  tag: v2.19.2-cublas-cuda12-ffmpeg-core@sha256:ffcef33926bb24f31aefe70bba84932968c509bd11a97ca21c1d178a793b8539
 cublasCuda11Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
@@ -21,19 +21,19 @@ cublasCuda11Image:
 cublasCuda11FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.1-cublas-cuda11-ffmpeg-core@sha256:3b868811ca8ee589eb4ea86bcabf0e991794a57a2aa1aa25ab6c023e4ba1811d
+  tag: v2.19.2-cublas-cuda11-ffmpeg-core@sha256:9a571aa7aab8c182aa9a8f8583dbed78a29f4ff81ebc90ab7eff57a330c5b467
 allInOneCuda12Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.1-aio-gpu-nvidia-cuda-12@sha256:07fc7e18e871b711f42688a6c98fdf588f5a8711a754b44c1afbda663cc2b35d
+  tag: v2.19.2-aio-gpu-nvidia-cuda-12@sha256:22734d5b39f10fa5463c67b69d36e75d81acf39b71cd06a5f29342ddc66f8c13
 allInOneCuda11Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.1-aio-gpu-nvidia-cuda-11@sha256:78c99ca29bf1cccd3586b54ae8fe863a44f46e3e5b27e8e2e0d9b18e20e990dc
+  tag: v2.19.2-aio-gpu-nvidia-cuda-11@sha256:f27dcc1040654028b8314eed6c548b84b8d1e55bc2a2ff17923a15cc8e15b237
 allInOneCpuImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.1-aio-cpu@sha256:f28abab3ab6a04a7e569cb824eb1e012312eeeab8cfaad4a69ef1ffe8910199c
+  tag: v2.19.2-aio-cpu@sha256:e272ca3b42eaa902d1a4fd521df5a01b1bbb62dd072a66ad6a5eab01e32c0b8c
 securityContext:
   container:
     runAsNonRoot: false
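
For reference, the repository/tag pairs above typically render into a container spec along these lines, assuming the chart joins them as `<repository>:<tag>` (the common Helm pattern; not verified against this chart's templates). Because each tag embeds a digest, the runtime resolves the pull against that exact digest even if the tag is later re-pushed.

```yaml
# Hypothetical rendered container entry for the allInOneCpuImage values above.
# Assumption: the chart templates the image reference as "<repository>:<tag>";
# the container name is illustrative.
containers:
  - name: local-ai
    image: docker.io/localai/localai:v2.19.2-aio-cpu@sha256:e272ca3b42eaa902d1a4fd521df5a01b1bbb62dd072a66ad6a5eab01e32c0b8c
    imagePullPolicy: IfNotPresent
```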