This project is mirrored from https://github.com/metabase/metabase.
Pull mirroring updated.
- Jul 15, 2024
github-automation-metabase authored
* Fix cache config migration failing with a float setting on MySQL (#45539)
* Update checksum of the v50.2024-06-12T12:33:08 migration that was changed in #45539 (#45563)

Co-authored-by: Ngoc Khuat <qn.khuat@gmail.com>
-
- Jul 10, 2024
github-automation-metabase authored
Co-authored-by: Cal Herries <39073188+calherries@users.noreply.github.com>
-
github-automation-metabase authored
Co-authored-by: Cal Herries <39073188+calherries@users.noreply.github.com>
-
- Jul 09, 2024
github-automation-metabase authored
Co-authored-by: Cal Herries <39073188+calherries@users.noreply.github.com>
-
- Jul 05, 2024
github-automation-metabase authored
Co-authored-by: Cal Herries <calherries@gmail.com>
-
- Jul 03, 2024
github-automation-metabase authored
* User Parameter Values Are Unique Per Dashboard (#45064)

  Fixes: #43001. Might be related: #44858.

  User parameter values previously stored just the `parameter-id` and `user-id` as the key for looking up the last used parameter value. This isn't sufficient, because the parameter id is not guaranteed to be unique. In particular, when a dashboard is duplicated, the parameters are copied to the new dashboard without changing the parameter id at all. This means that we also need to store the dashboard-id in the user_parameter_value table and use it to update/get the last used value.

  The migration removes existing entries from the user_parameter_value table because we want a non-nullable constraint on the dashboard_id column, and existing entries will not have a value. The only way to backfill those values would be to look through every dashboard containing parameters and, for every parameter, check for a matching ID. Even if you could do this, there's no way to disambiguate if the same parameter exists on 2 dashboards, so one of them would be wrong. It's not worth the trouble, considering that removing the entries in this table doesn't break anything; it just becomes a mild inconvenience that a user has to select a filter value again (since the dashboard will use the default value).

* Alter test to check for uniqueness across dashboards: the test makes 2 dashboards with parameters of the same ID and asserts that changing the value on dashboard-a does not change the value on dashboard-b
* Adjust migration to pass linter rules
* Remove the extra rollback on migration
* Update src/metabase/models/user_parameter_value.clj (review suggestions)
* Adjust parameters in test so we don't get logged warnings
* Put update!/insert! in a transaction to avoid any race conditions
* Change to proper migration IDs

Co-authored-by: bryan <bryan.maass@gmail.com>
Co-authored-by: Adam James <adam.vermeer2@gmail.com>
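The key change above can be sketched in miniature. This is a hedged Python illustration (the real code is Clojure backed by the user_parameter_value table, and the function names here are invented): duplicated dashboards share parameter ids, so a lookup keyed only on (user-id, parameter-id) is ambiguous, while adding the dashboard id disambiguates.

```python
# Illustration only: the tuple key mirrors the new non-nullable
# dashboard_id column on user_parameter_value.
last_used = {}  # (user_id, dashboard_id, parameter_id) -> last used value

def save_value(user_id, dashboard_id, parameter_id, value):
    last_used[(user_id, dashboard_id, parameter_id)] = value

def get_value(user_id, dashboard_id, parameter_id):
    return last_used.get((user_id, dashboard_id, parameter_id))

# Dashboard 200 is a duplicate of dashboard 100; both carry parameter "p1".
save_value(1, 100, "p1", "CA")
save_value(1, 200, "p1", "NY")
assert get_value(1, 100, "p1") == "CA"  # the original keeps its value
assert get_value(1, 200, "p1") == "NY"  # the duplicate no longer clobbers it
```

Without `dashboard_id` in the key, the second `save_value` call would overwrite the first, which is exactly the bug described in #43001.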
-
- Jul 01, 2024
github-automation-metabase authored
Co-authored-by: Alexander Solovyov <alexander@solovyov.net>
-
github-automation-metabase authored
Co-authored-by: Ngoc Khuat <qn.khuat@gmail.com>
-
github-automation-metabase authored
* Wrap non-latin characters in a span specifying a working font

  Fixes: #38753

  CSSBox seems to have a bug where font fallback doesn't work properly. This is noticeable when a font does not contain glyphs that are present in the string being rendered. For example, Lato does not have many international characters, so the rendered version of tables (that show up in Slack messages) will not render properly, making the card unreadable.

  Since this appears to be a downstream bug, I've opted to work around the limitation by wrapping any non-latin characters in a <span> that specifies the font family as sans-serif, which should contain the glyphs needed to render properly. This leaves Lato in place for other characters.

  For now, I figured it's worth trying this solution over using Noto, for 2 reasons:
  - we can keep Lato, which has been the decided font style for the app for some time (this keeps things consistent where possible)
  - the Noto font containing all glyphs is quite large (>20 MB, I think) and it would be nice to avoid bundling that if we can

* Stub installed fonts test
* Fix typo
* Do the wrapping per-string, and in Clojure data rather than in the HTML string: still wrap strings containing non-Lato characters, but via a postwalk over the Hiccup data prior to rendering rather than str/replace on the HTML string. Seems like a decent compromise versus patching CSSBox or fixing upstream (which may be worth doing, but would take longer to land).
* Add test checking that glyphs render correctly
* Add a test that directly checks the wrapping fn
* Change the string to keep the linter quiet
* Change how we check whether a string can be rendered to a faster method; new Lato font. With Sanya's help, the check for whether a given string is renderable with Lato is now faster. Also use the full Lato font, not just the 'latin' Lato, so we can cover more chars.
* Change Lato so that it loads the fn even in a fresh test run

Co-authored-by: adam-james <21064735+adam-james-v@users.noreply.github.com>
Co-authored-by: Nemanja Glumac <31325167+nemanjaglumac@users.noreply.github.com>
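The wrapping workaround above can be sketched as follows. This is a hedged Python analogue (the real fix is a Clojure postwalk over Hiccup data), and the "renderable in Lato" check here is a crude Unicode-range stand-in, not Metabase's actual glyph-coverage test: runs of characters the primary font can't render get wrapped in a `<span>` that forces a fallback family, while everything else is left untouched.

```python
import itertools

def renderable_in_lato(ch: str) -> bool:
    # Assumption for illustration: treat Latin ranges as covered by Lato.
    return ord(ch) < 0x250

def wrap_unrenderable(text: str) -> str:
    """Wrap maximal runs of non-renderable characters in a fallback span."""
    parts = []
    for covered, run in itertools.groupby(text, key=renderable_in_lato):
        chunk = "".join(run)
        if covered:
            parts.append(chunk)
        else:
            parts.append(f'<span style="font-family: sans-serif;">{chunk}</span>')
    return "".join(parts)

assert wrap_unrenderable("abc") == "abc"
assert (wrap_unrenderable("ab日本c")
        == 'ab<span style="font-family: sans-serif;">日本</span>c')
```

Grouping into maximal runs keeps the output small: one span per foreign substring rather than one per character.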
-
- Jun 26, 2024
github-automation-metabase authored
Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
-
- Jun 20, 2024
github-automation-metabase authored
Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
-
- Jun 12, 2024
github-automation-metabase authored
* Certain settings should be decrypted for the cache config migration to run (#44048)
* Fix test-migrations-for-driver! when rolling back and migrating again

Co-authored-by: Alexander Solovyov <alexander@solovyov.net>
Co-authored-by: Cal Herries <calherries@gmail.com>
-
- Jun 07, 2024
John Swanson authored
Liquibase changed their boolean type in MySQL from `bit(1)` to `tinyint(4)` in version 4.25.1. Our JDBC driver does not recognize these as booleans, so we needed to migrate them to `bit(1)`s.

As discussed [here](#36964), we changed all existing `boolean` types that were in `001_update_migrations.yml` but not the SQL initialization file. For new installations this works: things in the SQL initialization file get created with the `bit(1)` type. However, for existing installations there's a potential issue. Say I'm on v42 and am upgrading to v49. In v43, a new `boolean` was added. In this case, I'll get the `boolean` from the Liquibase migration rather than from the SQL initialization file, and it needs to be changed to a `bit(1)`.

I installed Metabase v41 with MySQL, migrated the database, then installed Metabase v49 and migrated again. I made a list of all the columns that had the type `tinyint`:

```
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, COLUMN_DEFAULT, IS_NULLABLE
    -> FROM INFORMATION_SCHEMA.COLUMNS
    -> WHERE COLUMN_TYPE = 'tinyint' AND TABLE_SCHEMA='metabase_test';
+---------------+------------------------------+-------------------+-------------+----------------+-------------+
| TABLE_SCHEMA  | TABLE_NAME                   | COLUMN_NAME       | COLUMN_TYPE | COLUMN_DEFAULT | IS_NULLABLE |
+---------------+------------------------------+-------------------+-------------+----------------+-------------+
| metabase_test | core_user                    | is_datasetnewb    | tinyint     | 1              | NO          |
| metabase_test | metabase_field               | database_required | tinyint     | 0              | NO          |
| metabase_test | metabase_fieldvalues         | has_more_values   | tinyint     | 0              | YES         |
| metabase_test | permissions_group_membership | is_group_manager  | tinyint     | 0              | NO          |
| metabase_test | persisted_info               | active            | tinyint     | 0              | NO          |
| metabase_test | report_card                  | dataset           | tinyint     | 0              | NO          |
| metabase_test | timeline                     | archived          | tinyint     | 0              | NO          |
| metabase_test | timeline                     | default           | tinyint     | 0              | NO          |
| metabase_test | timeline_event               | archived          | tinyint     | 0              | NO          |
| metabase_test | timeline_event               | time_matters      | tinyint     | NULL           | NO          |
+---------------+------------------------------+-------------------+-------------+----------------+-------------+
10 rows in set (0.01 sec)
```

Then I wrote migrations. For each column, we:

- turn it into a `bit(1)`,
- re-set the previously existing default value, and
- re-add the NOT NULL constraint, if applicable.

Co-authored-by: SakuragiYoshimasa <ysrhsp@outlook.com>
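The three per-column steps above can be sketched as the MySQL DDL a migration might emit. This is an assumption-laden illustration (the actual change ships as Liquibase changesets, not hand-built SQL, and the helper name is invented); it just shows that type, default, and NOT NULL can be restored in a single `MODIFY COLUMN`:

```python
def bit1_migration_sql(table: str, column: str, default, nullable: bool) -> str:
    """Build one ALTER statement converting a tinyint column back to bit(1),
    restoring its previous default and NOT NULL constraint in the same step.
    (Hypothetical helper, for illustration only.)"""
    ddl = f"ALTER TABLE {table} MODIFY COLUMN {column} BIT(1)"
    if not nullable:
        ddl += " NOT NULL"                    # re-add the constraint
    if default is not None:
        ddl += f" DEFAULT b'{int(default)}'"  # re-set the old default as a bit literal
    return ddl + ";"

assert (bit1_migration_sql("report_card", "dataset", 0, nullable=False)
        == "ALTER TABLE report_card MODIFY COLUMN dataset BIT(1) NOT NULL DEFAULT b'0';")
```

One statement per row of the `INFORMATION_SCHEMA` listing above would cover all ten affected columns.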
-
- Jun 06, 2024
github-automation-metabase authored
* Update translations for v50
* Add Danish
* Update translations for v50 release
* Alphabetical order is nice

Co-authored-by: Ryan Laurie <30528226+iethree@users.noreply.github.com>
-
- Jun 05, 2024
github-automation-metabase authored
* Remove Google Analytics driver
* Remove more GA-related tests
* Un-remove tests that aren't related to GA

Co-authored-by: Cam Saul <1455846+camsaul@users.noreply.github.com>
Co-authored-by: Cam Saul <github@camsaul.com>
-
- May 29, 2024
metabase-bot[bot] authored
Co-authored-by: Cal Herries <39073188+calherries@users.noreply.github.com>
Co-authored-by: Cal Herries <calherries@gmail.com>
-
- May 28, 2024
John Swanson authored
* Revert "[Feature branch] Make Trash Usable (#42339)". This reverts commit 7460cad8.
* Revert "Update API Changelog (#42864) (#42865)". This reverts commit dbd6fd71.
-
metabase-bot[bot] authored
Backported "Migrate uploads settings to the db-level behind the scenes, so the uploads DB can be set by the config file" (#43198)

Co-authored-by: Cal Herries <calherries@gmail.com>
-
- May 21, 2024
John Swanson authored
This is a bit painful. I merged this change, but realized we need to backport the fix to v49. However:

- We don't want to have two versions of the migration (one with a v49 id, one with a v50 id), because then if someone upgrades to 50 and downgrades to 49, the `rollback` will run and change the type back, leading to a bug.
- We don't want to push a v51 changeSet ID to v49 or v50, because we give the user a helpful notice when their database needs a downgrade. We do this by checking for the latest *executed* migration in the database, comparing it to the latest migration that Liquibase knows about, and making sure the executed one isn't bigger than the known one (e.g. you can't have executed a v51 migration if it isn't in the local migration yaml).

That would all work fine, except that we then want to tell you how to downgrade your database, and we use the latest-executed version for that. So if, for example, someone upgraded from 48 to 49 and got a v51 changeset, then downgraded back to 48, they would get an error telling them to run the *v51* jar to roll back their DB.

In this case, though, I think it's fine to just move the migration around to v49; then we can backport it to 49 and 50.
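The downgrade notice described above can be sketched as a version comparison. This is a hedged Python illustration (function names and the ID parsing are assumptions, not Metabase's actual implementation): compare the newest migration recorded as executed in the app DB against the newest migration the running jar knows about, and note that the error message names the *executed* version, which is exactly why a stray v51 changeset on a v49 database would produce a confusing "run the v51 jar" instruction.

```python
def major_version(migration_id: str) -> int:
    # Migration IDs look like "v49.2024-05-21T10:00:00"; take the major version.
    return int(migration_id.split(".", 1)[0].lstrip("v"))

def downgrade_notice(latest_executed: str, latest_known: str):
    """Return a user-facing notice if the app DB was migrated by a newer jar."""
    executed, known = major_version(latest_executed), major_version(latest_known)
    if executed > known:
        # The message is built from the executed migration's version,
        # hence the caveat about misplaced v51 changesets.
        return (f"Database was migrated by a newer Metabase; "
                f"run the v{executed} jar to roll back.")
    return None

assert downgrade_notice("v51.2024-05-21T10:00:00",
                        "v49.2024-05-20T09:00:00") is not None
assert downgrade_notice("v49.2024-05-21T10:00:00",
                        "v49.2024-05-20T09:00:00") is None
```

Keeping migration IDs in (version-ordered) sequence is what makes this single comparison sufficient.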
-
- May 20, 2024
metabase-bot[bot] authored
* Make Metabot Cool Again
* Update tests

Co-authored-by: Ryan Laurie <30528226+iethree@users.noreply.github.com>
-
- May 17, 2024
metabase-bot[bot] authored
Co-authored-by: Nemanja Glumac <31325167+nemanjaglumac@users.noreply.github.com>
-
Ngoc Khuat authored
-
Sloan Sparger authored
* Migration to add `trashed_from_*` (#41529)

  We want to record where things were trashed *from*, for two purposes:
  - first, we want to be able to put things back if they're "untrashed";
  - second, we want to be able to enforce permissions *as if* something were still in its previous location. That is, if we trash a card or a dashboard from Collection A, the permissions of Collection A should continue to apply to the card or dashboard (e.g. in terms of who can view it).

  To achieve this, collections get a `trashed_from_location` (paralleling their `location`) and dashboards/cards get a `trashed_from_collection_id` (paralleling their `collection_id`).

* Create the trash collection on startup (#41535)

  The trash collection (and its descendants) can't have its permissions modified. Note that right now, it's possible to move the Trash collection; I'll fix this when I implement the API for moving things to the Trash. Also: s/TRASH_COLLECTION_ID/trash-collection-id/g, and add a comment to explain the null comparison.

* Move archived items to the trash (#41543)

  This PR is probably too big. Hopefully documenting all the changes made here will make it easier to review and digest.

  Tables with a `collection_id` had `ON DELETE SET NULL` on the foreign key. Since we only deleted collections in testing and development, this didn't really affect anything. But now that we are actually deleting collections in production, it's important that nothing accidentally gets moved to the root collection, which is what `SET NULL` actually does.

  When we get an API request to update the `archived` flag on a card, collection, or dashboard, we either move the item *to* the trash (if `archived` is `true`) or *from* the trash (if `archived` is `false`). We set the `trashed_from_collection_id` flag as appropriate, and use it when restoring an item if possible.

  Because moving something to the trash would by itself have permissions implications (since permissions are based on the parent collection), we need to change the way permissions are enforced for trashed items. First, we change the definition of `perms-objects-set-for-parent-collection` so that it returns the permission set for the collection the object was *trashed from*, if it is archived. Second, when listing objects inside the Trash itself, we need to check permissions to make sure the user can actually read each object. Usually, of course, if you can read a collection you can read its contents, but this is not true for the Trash, so we need to check each one. In this case we're actually a little extra restrictive and only return objects the user can *write*. The reasoning is that you only really browse the Trash to see things you could "act on" (unarchive or permanently delete), so there's no reason to show you things you only have read permissions on.

  Because the Collections API previously expected archived collections to live anywhere, not just in a single tree rooted in the Trash, I needed to make some changes to some API endpoints. One endpoint still takes an `exclude-archived` parameter, which defaults to `false`: when it's `false`, we return the entire tree including the Trash collection and archived children; when it's `true`, we exclude the Trash collection (and its subtree) from the results. Another endpoint previously took an `archived` parameter, which defaulted to `false`, so it would only return non-archived results. This is a problem, because we want the frontend to be able to ask for the contents of the Trash or an archived subcollection without explicitly asking for archived results; we want to treat these like normal collections. The change made to support this was to default `archived` to the `archived` status of the collection itself: if you're asking for the items in an archived collection, you'll only get archived results; if you're asking for the items in a non-archived collection, you'll only get unarchived results. For normal collections, this is the same as returning all results, but see the section on namespaced collections for details on why we needed this slightly awkward default. A third endpoint is unchanged: it still takes an `archived` parameter, and when `archived=true` we return the Trash collection, as it is in the root collection; otherwise, we don't.

* Make Trash Usable - UI (#41666)

* Remove Archive Page + Add `/trash` routing (#42226)
  - removes the archive page and related resources; adds a new /trash route for the trash collection; adds redirects to ensure consistent /trash routing instead of the collection path
  - fixes unit + e2e tests; corrects links generated for the trash collection to use /trash over the /collection/:trashId route
  - updates a comment

* Serialize trash correctly (#42345)

  Also, create the Trash in a migration rather than on startup. Don't set a specific trash `collection.id`; instead, just select based on the `type` when necessary.

* Fix collection data for trashed items (#42284)
  - Fix collection IDs for trashed items. When something is in the trash, we need to check permissions on the `trashed_from_collection_id` rather than the `collection_id`. We were doing this. However, we want the actual collection data on the search result to represent the actual collection it's in (the trash). I added the code to do this, a test to make sure it works as intended, and a comment to explain what we're doing here and why.
  - Refactor permissions for `trashed_from_collection_id`. Noah pointed out that the logic of "what collection do I check for permissions" was getting sprinkled into numerous places, and it felt a little scary. In fact, there was a bug in the previous implementation: if you selected a collection with `(t2/select [:model/Collection ...])`, selecting a subset of columns that *didn't* include `trashed_from_collection_id`, and then called `mi/can-write?` on the result, you'd get erroneous results. Specifically, we'd return `nil` (representing the root collection). I think this is a reasonable fix, though I'm pretty new to using fancy multimethods with `derive` and such. Basically, I wanted a way to annotate a model to say "check these permissions using `:trashed_from_collection_id` if the item is archived." In this case, we'll throw an exception if `:trashed_from_collection_id` is not present, just like we currently throw an exception if `:collection_id` is not present when checking permissions the normal way.

* Move existing archived items to the trash (#42241)

  Move everything that's archived directly to the trash. It's not ideal because we're wiping out the existing collection structure, but the existing Archive page also has no collection structure.

* Lint fixes from rebase

* Fix backend tests (#42506)

  `can_write` on collection items was wrong because we didn't have a `trashed_from_collection_id` - the refactor to ensure we didn't make that mistake worked!

* Add an /api/collections/trash endpoint (#42476)

  This fetches the Trash in exactly the same way as if we'd fetched it with `/api/collection/:id` using the Trash ID. I hadn't realized that the frontend was doing this with the previously hardcoded Trash ID.

* Make Trash Usable - Dynamic Trash Collection Id (#42532)
  - fixes a hardcoded reference to the trash id in the sidebar
  - removes TRAHS_COLLECTION
  - fixes lint and e2e tests
  - fixes an inverted-logic mistake

* Make Trash Usable - Search Filter UI (#42402)
  - adds filtering for archived items on the search page
  - fixes a typing mistake

* Make Trash Usable - Bug Bash 1 (#42541)
  - disables reordering columns in archived questions; disables modifying an archived question's name in the question header; the collection picker no longer defaults to an archived item; keeps trashed collections from appearing in collection picker search results; stops showing an empty area in the trashed dashboard info sidebar; disables uploading CSVs to trashed collections
  - implements PR feedback; fixes an e2e failure

* Localize the name of the Trash (#42514)

  There are at least two spots where we can't just rely on the after-select hook and select the collection name directly from the database: the Search and Collection APIs. In these cases we need to call `collection/maybe-localize-trash-name`, which will localize the name if the passed Collection is the Trash collection.

* Update migration IDs (migration IDs need to be in order)

* Fix failing mariadb:latest test (#42608)

  Hooooly cow. Glad to finally track this down. The issue was that booleans come back from MariaDB as NON BOOLEANS, and Clojure says 0 is truthy, and together that makes for FUN TIMES. We need to turn MariaDB's bits into booleans before we can work with them.

* Make Trash Usable: Add `can_restore` to API endpoints (#42654)

  Rather than having two separate implementations of the `can_restore` hydration for cards and dashboards, I set up automagic hydration for the `trashed_from_collection_id`. Then the hydration method for `can_restore` can just first hydrate `trashed_from_collection` and then operate based on that.

* Fix can_restore for collections trashed from root

* Fix `trashed_from_location` for trashed subcollections

  We were setting `trashed_from_location` on subcollections to the `trashed_from_location` of the ancestor that was trashed, which is wrong. This would have caused bugs where collections were restored to the root collection, regardless of where they were originally trashed from.

  Co-authored-by: Sloan Sparger <sloansparger@gmail.com>

* Fix Trash rollbacks (#42710)

  There were a few errors in my initial implementation. First, we can't simply assume that `trashed_from_location` and `trashed_from_collection_id` are set for anything in the archive. They should be set for things *directly* trashed, but not for things trashed as part of a collection. Therefore, we need to set `location` and `collection_id` to something like "if the item is in the trash, then the `trashed_from` value; otherwise, don't modify it".

  Second, because `trashed_from_collection_id` is not a foreign key but `collection_id` is, we have a potential problem. If someone upgrades, trashes a dashboard, then trashes and permanently deletes the collection that dashboard _was_ in, then downgrades, how do we move the dashboard "back"? What do we set the dashboard's `collection_id` to? The solution here is that when we downgrade, we don't actually move dashboards, collections, or cards out of the Trash collection. Instead, we just make Trash a normal collection and leave everything in it.

* Make Trash Usable - Bug Bash 2 (#42787)
  - fixes access to a property on a null value, plus PR cleanup

* Fix up tests

  Permissions now check the archived-from location, so recents need to select `:report_card.trashed_from_collection_id` and `:dash.trashed_from_collection_id` to satisfy `mi/can-read?`. Also restored some tests that had been commented out with `(def wtf (mt/user-http-request ...))` - probably commented out because the recent views come back in an order but we don't care about the order, and the test was asserting against an ordered collection. Just go from `[{:id <id> :model <model>} ...]` to a map of `{<id> <model>}`, and then the comparison works fine and is not sensitive to order.

* Put try/finally around read-only-mode

  Ensure that setting read-only-mode! back to false happens in a finally block:

  ```clojure
  (deftest read-only-login-test
    (try
      (cloud-migration/read-only-mode! true)
      (mt/client :post 200 "session" (mt/user->credentials :rasta))
      (finally
        (cloud-migration/read-only-mode! false))))
  ```

  But worryingly, I think we have a lurking problem that I'm not sure why we haven't hit yet. We run tests in parallel and then put all of the non-parallel tests on the same main thread. When one of these puts the app in read-only mode, the parallel tests will fail. HOPEFULLY, since they are parallel, they won't be hitting the app-db necessarily, but there could be lots of silly things that break.

Co-authored-by: John Swanson <john.swanson@metabase.com>
Co-authored-by: dan sutton <dan@dpsutton.com>
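The central permissions rule above can be sketched compactly. This is a hedged Python illustration (the real logic lives in Clojure multimethods; the function name is invented): a trashed item is governed by the collection it was trashed *from*, and a row fetched without `trashed_from_collection_id` must fail loudly rather than silently fall back to `nil` (the root collection), mirroring the bug the refactor fixed.

```python
def collection_for_perms(item: dict):
    """Return the collection id whose permissions govern this item."""
    if item.get("archived"):
        if "trashed_from_collection_id" not in item:
            # Mirrors the commit's safeguard: selecting a subset of columns
            # without trashed_from_collection_id must raise, not return the
            # root collection by accident.
            raise KeyError("trashed_from_collection_id was not fetched")
        return item["trashed_from_collection_id"]
    return item["collection_id"]

assert collection_for_perms({"archived": False, "collection_id": 7}) == 7
assert collection_for_perms({"archived": True,
                             "collection_id": 13,  # the Trash itself
                             "trashed_from_collection_id": 7}) == 7
```

Centralizing this choice in one place is the point of the refactor: every caller that used to inline "which collection id do I check?" now goes through a single rule.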
-
Luiz Arakaki authored
* Update Metabase analytics with the cache dashboard tab and the subscriptions and alerts dashboard
* Update yamls without view_count
* Fix the Metabase analytics namespace
-
- May 16, 2024
Filipe Silva authored
* feat: cloud migration endpoints

* fix: don't leave an open conn behind when checking migrations

  `unrun-migrations` would open a new connection but not close it. Since `with-liquibase` is happy enough using a data-source, the fix is straightforward. You can verify this by running the following code:

  ```clojure
  (comment
    (require '[metabase.cmd.dump-to-h2 :as dump-to-h2]
             '[metabase.analytics.prometheus :as prometheus])

    (defn dump []
      (-> (str "cloud_migration_dump_" (random-uuid) ".mv.db")
          io/file
          .getAbsolutePath
          dump-to-h2/dump-to-h2!))

    (defn busy-conns []
      (-> (prometheus/connection-pool-info) first second :numBusyConnections))

    ;; each dump leaves behind 1 busy conn
    (busy-conns) ;; => 0
    (dump)
    (busy-conns) ;; => 1
    (dump)
    (busy-conns) ;; => 2
    (dump)
    (busy-conns) ;; => 3
    )
  ```

* fix: flush h2 before returning from dump
* rfct: move code to models.cloud-migration
* test: add login-while-on-read-only test
* fix: assorted cloud_migration fixes from review
* test: allow overriding the uploaded dump
* fix: add UserParameterValue to read-only exceptions; also make the list a little nicer
* fix: only block dumped tables on read-only
* fix: recover on startup if read-only was left on
* feat: block migration when already hosted
* test: test settings for migration
* feat: cloud migration supports retries and multipart
* test: sane dev defaults for migration
* fix: upload 100% shouldn't be migration 100%
* chore: make up a new migration id after merge

* Cloud Migration FE (#42542)
  - UI work in progress: dynamic polling intervals and a custom confirmation modal for migration
  - cleans out most of the remaining UI TODOs; adds a progress component; implements team feedback
  - makes the component more testable and adds unit tests for the CloudPanel
  - reverts API changes; updates progress styling; fixes type issues
  - fixes an e2e failure; fixes feature unit tests by holding the last migration state in fetchMock if more requests than expected happen at the end of a test; removes a whitespace-only change in a clj file
  - fixes copy from "ready-only" to "read-only", plus other copy fixes
  - Update frontend/src/metabase/admin/settings/components/CloudPanel/MigrationError.tsx (review suggestion)
  - Update frontend/src/metabase/admin/settings/components/CloudPanel/MigrationInProgress.tsx (review suggestion)
  - adds an e2e test; addresses PR feedback

Co-authored-by: Sloan Sparger <sloansparger@users.noreply.github.com>
Co-authored-by: Nick Fitzpatrick <nickfitz.582@gmail.com>
Co-authored-by: Raphael Krut-Landau <raphael.kl@gmail.com>
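The connection-leak fix above has a simple shape that translates to any pooled-connection code. This is an illustrative Python analogue (all names hypothetical; it only mimics the busy-connection counter shown in the Clojure comment block): a helper that opens a connection and never closes it leaks one busy connection per call, while scoping the connection deterministically releases it.

```python
class Pool:
    """Toy pool that only tracks how many connections are checked out."""
    def __init__(self):
        self.busy = 0
    def connection(self):
        pool = self
        class Conn:
            def __enter__(self):
                pool.busy += 1
                return self
            def __exit__(self, *exc):
                pool.busy -= 1
        return Conn()

pool = Pool()

def unrun_migrations_leaky():
    conn = pool.connection()
    conn.__enter__()            # opened but never closed: stays busy forever
    return []

def unrun_migrations_fixed():
    with pool.connection():     # released on exit, even if an error is raised
        return []

unrun_migrations_leaky()
assert pool.busy == 1           # one busy conn left behind, as in the repro
unrun_migrations_fixed()
assert pool.busy == 1           # no additional leak
```

This is the same discipline as passing a data-source to `with-liquibase` in the actual fix: let the scoped construct own the connection's lifetime.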
-
Ryan Laurie authored
* Basic Upsell System setup
* Hosting upsell
* Add upsell to the Updates page
* Update UTMs
* Update copy, UTMs, and text
-
adam-james authored
* Area/Bar/Combo stacked viz settings migration, per https://www.notion.so/metabase/Migration-spec-e2732e79215a4a5fa44debb272413db9. This addresses the necessary migrations for area/bar/combo stacked viz settings.
* Custom migration test works
* Check that non-area/bar/combo charts also remove stack_display
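The last point above (dropping `stack_display` for charts that can't be stacked) can be sketched as a settings migration. This is a hedged Python illustration: `stack_display` comes from the commit message, but the display names and the function shape are assumptions, not Metabase's actual viz-settings schema.

```python
STACKABLE = {"area", "bar", "combo"}  # assumption: displays that support stacking

def migrate_viz_settings(display: str, settings: dict) -> dict:
    """Drop stacking-related keys from cards whose display type can't stack."""
    settings = dict(settings)  # don't mutate the stored row in place
    if display not in STACKABLE:
        settings.pop("stack_display", None)
    return settings

assert "stack_display" not in migrate_viz_settings("line", {"stack_display": "bar"})
assert migrate_viz_settings("area", {"stack_display": "area"}) == {"stack_display": "area"}
```

Copying the dict before mutating keeps the migration idempotent and side-effect-free on the input, which makes the custom migration test mentioned above easier to write.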
-
- May 15, 2024
Ngoc Khuat authored
A one-time job to init the send-pulse trigger, and a down migration to clean up send-pulse triggers (#42316)
* Add a job to init the send-pulse trigger only once
* Send-pulse triggers should respect the report timezone (#42502)
-
- May 13, 2024
Noah Moss authored
-
Nicolò Pretto authored
-
- May 09, 2024
Ngoc Khuat authored
-
- May 08, 2024
Cal Herries authored
-
Alexander Solovyov authored
-
- May 07, 2024
Ngoc Khuat authored
-
- May 01, 2024
Ngoc Khuat authored
-
- Apr 30, 2024
adam-james authored
* Store parameter values set by the user on a per-user basis

  This is a WIP for #28181 and the notion doc: https://www.notion.so/metabase/Tech-Maintain-user-level-state-in-dashboards-for-filters-fc16909a3216482f896934dd94b54f9a

  Still to do:
  - [ ] validate the table/model design
  - [ ] hook up the endpoints mentioned in the doc (2 in api/dashboard)
  - [ ] return the user-specific parameter values on /api/dashboard/:dashboardID endpoints
  - [ ] write a few tests to capture the intent of this feature

* Accidentally deleted a digit in the change ID timestamp
* First pass at writing user param values to the db. It's in a doseq here, which is probably not the correct way to go yet, but it's a step in the right direction.
* Hydrate the dashboard response with a `:last_used_param_values` key. If the user has previously set a parameter's value, it will show up in the map under that parameter id. If the user has no parameter value set, that entry will be `null`.
* Use proper fn name
* Only save or retrieve user params if there's a current user id
* Add model to necessary lists
* Only run the query when there are parameter ids to run it with
* Add a test to confirm that last_used_param_values works
* Add a models test namespace for a CRUD test
* The hydration is checked in the dashboard API test already
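The `:last_used_param_values` hydration described above has a simple shape. This is a hedged Python sketch (the real code is Clojure, and the function and argument names here are invented): build a map from each parameter id on the dashboard to the current user's last used value, with null/None for parameters the user never set.

```python
def last_used_param_values(dashboard_param_ids, saved_values):
    """dashboard_param_ids: parameter ids present on the dashboard;
    saved_values: {parameter_id: value} rows stored for the current user."""
    return {pid: saved_values.get(pid) for pid in dashboard_param_ids}

hydrated = last_used_param_values(["date", "state"], {"state": "CA"})
assert hydrated == {"date": None, "state": "CA"}
```

Including the explicit None entries matches the commit's description: every parameter id appears in the map, so the frontend can distinguish "never set" from "missing".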
-
- Apr 29, 2024
Noah Moss authored
-
- Apr 26, 2024
Aleksandr Lesnenko authored
Co-authored-by: Emmad Usmani <emmadusmani@berkeley.edu>
Co-authored-by: Adam James <adam.vermeer2@gmail.com>
Co-authored-by: Mark Bastian <markbastian@gmail.com>
Co-authored-by: Jesse Devaney <22608765+JesseSDevaney@users.noreply.github.com>
Co-authored-by: Anton Kulyk <kuliks.anton@gmail.com>
-
Alexander Solovyov authored