This project is mirrored from https://github.com/metabase/metabase.
  1. Jun 20, 2024
  2. Jun 18, 2024
  3. Jun 13, 2024
  4. Jun 12, 2024
  5. Jun 11, 2024
    • Make recents understand context (#43478) · 7b849da3
      bryan authored
      
      * adds the endpoint and a test
      
      * add recents endpoint, todo: tests
      
      * add post recents endpoint
      
      * return recent views for both lists, but make the endpoint work
      
      * Make recent-views context aware
      
      - pruning is context aware, only checks for the recently-inserted
        context
      - Adds endpoints:
          - POST to create selection recents,
          - GET activity/recents
            - requires context query param to be one of:
            - all, views, selections
      
      - Adds context arg to update-users-recent-views!
      - Cleans up arg schema for update-users-recent-views
      
      * impl GET api/activity/recents
      
      - return recent-selections and recent-views from
      - send context == collection from pinned card reads
      
      * update callsites of recent-views/update-users-recent-views!
      
      - to use :view param where necessary
      
      * fixes models/recent-view tests
      
      * adds more activity/recent-view tests
      
      * wip
      
      - fix whitespace linter
      
      * Fix command palette e2e test
      
      - reuse util snake-keys
      
      * updates fe to use new recents endpoints
      
      * fixes fe type issue
      
      * snake-keys -> deep-snake-keys
      
      - I've been betrayed by lsp rename
      
      * snake-keys -> deep-snake-keys
      
      - I've been betrayed by lsp rename
      
      * log selection events for created collections
      
      * mysql doesn't allow default values for TEXT types
      
      * log a recent view on data-picker selection
      
      * decouple view-log context from card-event view context
      
      * fix a doc typo
      
      * stop double logging recent-views on POST
      
      * maybe fixes some tests
      
      * some e2e fixes
      
      * fix mysterious divide by zero during score search
      
      * fix divide by zero possibilities everywhere in score-items
      
      metabase.api.activity/score-items used to throw when there weren't any
      items being scored (even though there's a `(when (seq items) ...)` check)
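
      A generic sketch of the kind of guard this implies (names here are illustrative, not the actual score-items code):

      ```clojure
      ;; never divide by a zero denominator when normalizing a score
      (defn- normalized-score [score max-score]
        (if (zero? max-score)
          0
          (/ score max-score)))
      ```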
      
      * more test fixes
      
      * fix more e2e tests, + rename endpoint in tests
      
      * fix oopsie
      
      * fixes a few more tests
      
      * address review comments/questions
      
      * allow for a comma-delimited list in query params like ?context=views,selections
      
      returns all recent view items with those contexts, most recent first
      under the `:recents` key.
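
      A hedged sketch of what a call could look like, using the `mt/user-http-request` test helper that appears later in this log; the response shape shown is an assumption:

      ```clojure
      (comment
        (mt/user-http-request :rasta :get 200 "activity/recents?context=views,selections")
        ;; => {:recents [{:model "card", :id 1, ...} {:model "dashboard", :id 2, ...}]}
        )
      ```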
      
      * refactors FE around new endpoint
      
      * fixes for unit tests
      
      * use ms/QueryVectorOf for context
      
      * fix models/recent_views tests
      
      * use multiple query params instead of comma delimited value on FE
      
      * ignore timestamp when deduping recents
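
      A minimal sketch of that dedupe (hypothetical helper, plain Clojure): two recents count as the same item when everything except `:timestamp` matches, and the first (most recent) occurrence wins.

      ```clojure
      (defn dedupe-recents [recents]
        (->> recents
             (reduce (fn [[acc seen] item]
                       (let [k (dissoc item :timestamp)]
                         (if (contains? seen k)
                           [acc seen]
                           [(conj acc item) (conj seen k)])))
                     [[] #{}])
             first))
      ```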
      
      * review comments + test fixing
      
      * update docstring
      
      * fix api/activity_test tests
      
      * actually dedupe
      
      * add test for deduping by context
      
      * e2e fix: shows up-to-date list of recently viewed items after another page is visited
      
      * e2e fix: should undo the question replace action
      
      * e2e fix: should replace a dashboard card question (metabase#36984)
      
      * e2e fix: should preselect the most recently visited dashboard
      
      * fix 6 more e2e tests
      
      * fixes fe type check and unit failure
      
      * renames unit test mocking function
      
      * fix a flaky e2e test
      
      * widen Item to accept str or kw where sensible
      
      - allow strings or keywords for moderated_status, and authority_level
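
      A hedged Malli sketch of the widening (the exact schema shape is an assumption):

      ```clojure
      (require '[malli.core :as m])

      (def ModeratedStatus
        ;; accept either representation for these fields
        [:maybe [:or :string :keyword]])

      (m/validate ModeratedStatus "verified") ;; => true
      (m/validate ModeratedStatus :verified)  ;; => true
      (m/validate ModeratedStatus nil)        ;; => true
      ```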
      
      * simplify impl + add test
      
      * add view-log to events schema
      
      * add collection context to view log
      
      * fix the final 2 failing e2e tests
      
      * click dashboard tab when the user has can-read? on recent entities
      
      ---------
      
      Co-authored-by: Sloan Sparger <sloansparger@gmail.com>
    • Improve the Trash data model (#42845) · 7374e61e
      John Swanson authored
      
      * Improve the Trash
      
      Ok, so I had a realization at the PERFECT time, immediately after the RC
      cutoff. Great job, brain!
      
      Here's the realization. For the Trash, we need to keep track of two
      things:
      
      - where the item actually is located in the hierarchy, and
      
      - what collection we should look at to see what permissions apply to the
      item.
      
      For example, a Card might be in the Trash, but we need to look at
      Collection 1234 to see that a user has permission to Write that card.
      
      My implementation of this was to add a column,
      `trashed_from_collection_id`, so that we could move a Card or a
      Dashboard to a new `collection_id`, but keep track of the permissions we
      actually needed to check.
      
      So:
      
      - `collection_id` was where the item was located in the collection
      hierarchy, and
      
      - `trashed_from_collection_id` was where we needed to look to check
      permissions.
      
      Today I had the realization that it's much, much more important to get
      PERMISSIONS right than to get collection hierarchy right. Like if we
      mess up and show something as in the Trash when it's not in the Trash,
      or show something in the wrong Collection - that's not great, sure. But
      if we mess up and show a Card when we shouldn't, or show a Dashboard
      when we shouldn't, that's Super Duper Bad.
      
      So the problem with my initial implementation was that we needed to
      change everywhere that checked permissions, to make sure they checked
      BOTH `trashed_from_collection_id` and `collection_id` as appropriate.
      
      So... there's a much better solution. Instead of adding a column to
      represent the *permissions* that we should apply to the dashboard or
      card, add a column to represent the *location in the hierarchy* that
      should apply to the dashboard or the card.
      
      We can simplify further: the *only time* we want to display something in
      a different place in the hierarchy than usual is when it was put
      directly into the trash. If you trash a dashboard as a part of a
      collection, then we should display it in that collection just like
      normal.
      
      So, we can do the following (a sketch follows the list):
      
      - add a `trashed_directly` column to Cards and Dashboards, representing
      whether they should be displayed in the Trash instead of their actual
      parent collection
      
      - use the `collection_id` column of Cards and Dashboards without
      modification to represent permissions.
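
      A minimal sketch of how the resulting model could be queried, assuming Toucan2 (`t2`) and the column names used in this message (`trashed_directly` is later renamed `archived_directly`):

      ```clojure
      (require '[toucan2.core :as t2])

      (comment
        ;; items listed inside the Trash: archived AND trashed directly
        (t2/select :model/Dashboard :archived true :trashed_directly true)
        ;; permissions keep using the unmodified parent collection
        (t2/select-one-fn :collection_id :model/Dashboard :id 42))
      ```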
      
      There's one main downside of this approach. If you trash a dashboard,
      and then delete the collection that the dashboard was originally in,
      what do we do with `dashboard.collection_id`?
      
      - we have to change it, because it's a foreign key
      
      - we can't set it to null, because that represents the root collection
      
      In this initial implementation, I've just cascaded the delete: if you
      trash a dashboard and then delete the collection it was in, the
      dashboard will be deleted. This is not ideal. I'm not totally sure what
      we should do in this situation.
      
      * Rip out all the `trashed_from_collection_id`
      
      * Migration to delete trashed_from_collection_id
      
      * fixes
      
      * don't move collections
      
      And don't allow deleting collections
      
      * only show cards/dashboards with write perms
      
      * Show the correct archived/unarchived branch
      
      * some cleanup
      
      * add a todo for tomorrow
      
      * Fix for yesterday's TODO
      
      * more wip
      
      * refactor
      
      * memoize collection info
      
      * move around memoization a bit
      
      * fix schema migration test
      
      * oops, delete server.middleware.memo
      
      * Use a migration script (postgres only for now)
      
      * Fix some tests
      
      * remove n+1 queries in `collection-can-restore`
      
      * fix test
      
      * fix more tests, and x-db migration script
      
      * fix h2 rollback
      
      * fix mysql/mariadb migration
      
      * lint
      
      * fix some mariadb/mysql tests
      
      * fix h2 rollback
      
      * Fix mysql rollback
      
      * Fix Postgres migration
      
      * "Real" `trash_operation_id` UUIDs from migration
      
      * fix mariadb migration
      
      * Separate MySQL/MariaDB migrations
      
      * trashed directly bit->boolean
      
      * Remove `trashed_from_*` from migrations
      
      * Rename `api/updates-with-trashed-directly`
      
      Previously named `move-on-archive-or-unarchive`, which was no longer
      accurate since we're not moving anything!
      
      * Add `can_delete`
      
      * Delete test of deleted code
      
      * Can't move anything to the Trash
      
      The Trash exists as a real collection so that we can list items in it
      and such without needing special cases. But we don't want anything to
      _actually_ get moved to it.
      
      * integrates can_delete flag on FE
      
      * Update src/metabase/models/collection.clj
      
      Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
      
      * Update src/metabase/search/impl.clj
      
      Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
      
      * Better name for `fix-collection-id`
      
      Also consolidated it into a single function. It takes the
      trash-collection-id as an argument to prevent a circular dependency.
      
      * s/trashed/archived/ on the backend
      
      In the product, we want to move to "trashed" as the descriptor for
      things that are currently-known-as-archived.
      
      It'd be nice to switch to that on the backend as well, but that'll
      require quite a lot of churn on the frontend as we'd need to change the
      API (or have an awkward translation layer, where we call it `trashed` on
      the backend, turn it into `archived` when sending data out via the API,
      and then the FE turns around and calls it "trashed" again).
      
      For now, let's just be consistent on the backend and call it `archived`
      everywhere. So:
      
      - `archived_directly` instead of `trashed_directly`, and
      - `archive_operation_id` instead of `trash_operation_id`
      
      * Fix up a couple docstrings
      
      * Rename `visible-collection-ids` args
      
      for `collection/permissions-set->visible-collection-ids`:
      
      - `:include-archived` => `:include-archived-items`
      
      - `:include-trash?` => `:include-trash-collection`
      
      * select affected-collection-ids in the tx
      
      * Stop dealing with `:all` visible collection ids
      
      This used to be a possible return value for
      `permissions-set->visible-collection-ids` indicating that the user could
      view all collections. Now that function always returns a set of
      collection IDs (possibly including "root" for the root collection) so we
      don't need to deal with that anymore.
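
      A before/after sketch of what this simplifies for callers (illustrative helpers, not the actual code):

      ```clojure
      ;; before: callers had to special-case the :all sentinel
      (defn- visible-before? [visible-ids coll-id]
        (or (= visible-ids :all)
            (contains? visible-ids (or coll-id "root"))))

      ;; after: the function always returns a set, so a plain lookup suffices
      (defn- visible-after? [visible-ids coll-id]
        (contains? visible-ids (or coll-id "root")))
      ```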
      
      * Don't use separate hydration keys for collections
      
      Dispatch on the model of each item (but still keep them in batches for
      efficiency.)
      
      * vars for collectable/archived-directly models
      
      * Round up loose `trashed_directly`s
      
      * Fix a test
      
      * Use `clojure.core.memoize` instead of handrolled
      
      It's slightly different (TTL instead of exactly per-request) but that
      should be fine.
      
      * FE lint fixes
      
      * :sob: e2e test fix
      
      ---------
      
      Co-authored-by: Sloan Sparger <sloansparger@gmail.com>
      Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
  6. Jun 10, 2024
  7. Jun 06, 2024
    • Fix remaining `tinyint` booleans in MySQL (#43296) · 49b126ef
      SakuragiYoshimasa authored
      
      * Correct type for `report_card.dataset`
      
      This was missed in the original migration of boolean types to
      ${boolean.type} in MySQL/MariaDB, and then missed again by me when I
      migrated `collection_preview` over a week ago.
      
      * Change all boolean types to `bit(1)` in MySQL
      
      Liquibase changed their boolean type in MySQL from `bit(1)` to
      `tinyint(4)` in version 4.25.1. Our JDBC driver does not recognize these
      as booleans, so we needed to migrate them to `bit(1)`s.
      
      As discussed [here](#36964), we
      changed all existing `boolean` types that were in the
      `001_update_migrations.yml` but not the SQL initialization file.
      
      For new installations, this works: things in the SQL initialization file
      get created with the `bit(1)` type.
      
      However, for existing installations, there's a potential issue. Say I'm
      on v42 and am upgrading to v49. In v43, a new `boolean` was added.
      
      In this case, I'll get the `boolean` from the liquibase migration rather
      than from the SQL initialization file, and it needs to be changed to a
      `bit(1)`.
      
      I installed Metabase v41 with MySQL, migrated the database, and then
      installed Metabase v49 and migrated again. I made a list of all the
      columns that had the type `tinyint`:
      
      ```
      mysql> SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, COLUMN_DEFAULT, IS_NULLABLE FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_TYPE = 'tinyint' AND TABLE_SCHEMA='metabase_test';
      +---------------+------------------------------+-------------------+-------------+----------------+-------------+
      | TABLE_SCHEMA  | TABLE_NAME                   | COLUMN_NAME       | COLUMN_TYPE | COLUMN_DEFAULT | IS_NULLABLE |
      +---------------+------------------------------+-------------------+-------------+----------------+-------------+
      | metabase_test | core_user                    | is_datasetnewb    | tinyint     | 1              | NO          |
      | metabase_test | metabase_field               | database_required | tinyint     | 0              | NO          |
      | metabase_test | metabase_fieldvalues         | has_more_values   | tinyint     | 0              | YES         |
      | metabase_test | permissions_group_membership | is_group_manager  | tinyint     | 0              | NO          |
      | metabase_test | persisted_info               | active            | tinyint     | 0              | NO          |
      | metabase_test | report_card                  | dataset           | tinyint     | 0              | NO          |
      | metabase_test | timeline                     | archived          | tinyint     | 0              | NO          |
      | metabase_test | timeline                     | default           | tinyint     | 0              | NO          |
      | metabase_test | timeline_event               | archived          | tinyint     | 0              | NO          |
      | metabase_test | timeline_event               | time_matters      | tinyint     | NULL           | NO          |
      +---------------+------------------------------+-------------------+-------------+----------------+-------------+
      10 rows in set (0.01 sec)
      ```
      
      Then I wrote migrations (sketched below the list). For each column, we:
      
      - turn it into a `bit(1)`,
      
      - re-set the previously existing default value, and
      
      - re-add the NOT NULL constraint, if applicable.
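
      An ad-hoc illustration of the per-column change, shown for one of the columns above via next.jdbc; the real change ships as Liquibase changeSets, so treat this as a sketch only:

      ```clojure
      (require '[next.jdbc :as jdbc])

      (defn tinyint->bit!
        "Sketch: convert one tinyint boolean column to bit(1), restoring its
        default value and NOT NULL constraint."
        [datasource]
        (jdbc/execute! datasource
                       ["ALTER TABLE core_user
                         MODIFY COLUMN is_datasetnewb BIT(1) NOT NULL DEFAULT b'1'"]))
      ```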
      
      * Change author and add missing `dbms`
      
      ---------
      
      Co-authored-by: John Swanson <john.swanson@metabase.com>
    • Update translations for v50 and add Danish Language Support (#43314) · 2cdae6bc
      Ryan Laurie authored
      * update translations for v50
      
      * add Danish
      
      * update translations for v50 release
      
      * alphabetical order is nice
  8. May 30, 2024
  9. May 28, 2024
  10. May 27, 2024
  11. May 21, 2024
    • Fix `report_card.collection_preview` in v49 (#42950) · 1279a73a
      John Swanson authored
      This is a bit painful. I merged this change, but realized we need to
      backport the fix to v49. However:
      
      - we don't want to have two versions of the migration (one with a v49
      id, one with a v50 id) because then if someone upgrades to 50, then
      downgrades to 49, the `rollback` will run and change the type back,
      leading to a bug.
      
      - we don't want to push a v51 changeSet ID to v49 or v50, because we
      give the user a helpful notice when their database needs a downgrade.
      We do this by checking for the latest *executed* migration in the
      database and comparing it to the latest migration that Liquibase knows
      about, and making sure the executed isn't bigger than the known (e.g.
      you can't have executed a v51 migration if it isn't in the local
      migration yaml). That would all work fine, except that then we want to
      tell you how to downgrade your database, and we use the latest-executed
      version for that. So if, for example, someone upgraded from 48 to 49 and
      got a v51 changeset, then downgraded back to 48, they would get an error
      telling them to run the *v51* jar to roll back their DB.
      
      In this case though, I think it's fine to just move the migration around
      to v49, then we can backport it to 49 and 50.
  12. May 20, 2024
  13. May 17, 2024
    • Nemanja Glumac
    • Add task history status (#42372) · 29a1713d
      Ngoc Khuat authored
    • [Feature branch] Make Trash Usable (#42339) · 7460cad8
      Sloan Sparger authored
      * Migration to add `trashed_from_*` (#41529)
      
      We want to record where things were trashed *from* for two purposes:
      
      - first, we want to be able to put things back if they're "untrashed".
      
      - second, we want to be able to enforce permissions *as if* something is
      still in its previous location. That is, if we trash a card or a
      dashboard from Collection A, the permissions of Collection A should
      continue to apply to the card or dashboard (e.g. in terms of who can
      view it).
      
      To achieve this, collections get a `trashed_from_location` (paralleling
      their `location`) and dashboards/cards get a
      `trashed_from_collection_id` (paralleling their `collection_id`).
      
      * Create the trash collection on startup (#41535)
      
      * Create the trash collection on startup
      
      The trash collection (and its descendants) can't have its permissions
      modified.
      
      Note that right now, it's possible to move the Trash collection. I'll
      fix this when I implement the API for moving things to the Trash.
      
      * s/TRASH_COLLECTION_ID/trash-collection-id/g
      
      * Add a comment to explain null comparison
      
      * Move archived items to the trash (#41543)
      
      This PR is probably too big. Hopefully documenting all the changes made here will make it easier to review and digest. So:
      
      Tables with a `collection_id` had `ON DELETE SET NULL` on the foreign key. Since we only deleted collections in testing and development, this didn't really affect anything. But now that we are actually deleting collections in production, it's important that nothing accidentally gets moved to the root collection, which is what `SET NULL` actually does.
      
      When we get an API request to update the `archived` flag on a card, collection, or dashboard, we either move the item *to* the trash (if `archived` is `true`) or *from* the trash (if `archived` is `false`). We set the `trashed_from_collection_id` flag as appropriate, and use it when restoring an item if possible.
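
      A hedged sketch of that archive/unarchive handling with hypothetical helper names (the real logic lives in the API layer and uses the Trash collection id mentioned elsewhere in this log):

      ```clojure
      (require '[toucan2.core :as t2])

      (declare trash-collection-id) ; helper referenced elsewhere in this log

      (defn set-archived!
        "Sketch only: move an item to or from the Trash, tracking where it came from."
        [model id item archived?]
        (if archived?
          (t2/update! model id {:archived                   true
                                :trashed_from_collection_id (:collection_id item)
                                :collection_id              (trash-collection-id)})
          (t2/update! model id {:archived                   false
                                :collection_id              (:trashed_from_collection_id item)
                                :trashed_from_collection_id nil})))
      ```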
      
      Because moving something to the trash by itself would have permissions implications (since permissions are based on the parent collection) we need to change the way permissions are enforced for trashed items.
      
      First, we change the definition of `perms-objects-set-for-parent-collection` so that it returns the permission set for the collection the object was *trashed from*, if it is archived.
      
      Second, when listing objects inside the Trash itself, we need to check permissions to make sure the user can actually read the object - usually, of course, if you can read a collection you can read the contents, but this is not true for the trash so we need to check each one. In this case we're actually a little extra restrictive and only return objects the user can *write*. The reasoning here is that you only really want to browse the Trash to see things you could "act on" - unarchive or permanently delete. So there's no reason to show you things you only have read permissions on.
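
      In code terms, the listing can be filtered with the `mi/can-write?` check referenced elsewhere in this log; a sketch, assuming the conventional `metabase.models.interface` alias:

      ```clojure
      (require '[metabase.models.interface :as mi])

      ;; when listing the Trash, keep only items the user could act on
      ;; (unarchive or permanently delete)
      (defn actionable-trash-items [items]
        (filterv mi/can-write? items))
      ```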
      
      Because previously the Collections API expected archived collections to live anywhere, not just in a single tree based in the Trash, I needed to make some changes to some API endpoints.
      
      This endpoint still takes an `exclude-archived` parameter, which defaults to `false`. When it's `false`, we return the entire tree including the Trash collection and archived children. When it's `true`, we exclude the Trash collection (and its subtree) from the results.
      
      Previously, this endpoint took an `archived` parameter, which defaulted to `false`. Thus it would only return non-archived results. This is a problem, because we want the frontend to be able to ask for the contents of the Trash or an archived subcollection without explicitly asking for archived results. We just want to treat these like normal collections.
      
      The change made to support this was to just default `archived` to the `archived` status of the collection itself. If you're asking for the items in an archived collection, you'll only get archived results. If you're asking for the items in a non-archived collection, you'll only get unarchived results.
      
      This is, for normal collections, the same thing as just returning all results. But see the section on namespaced collections for details on why we needed this slightly awkward default.
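
      A minimal sketch of that defaulting rule (hypothetical helper; the real endpoint reads it from the request params):

      ```clojure
      (defn effective-archived-param
        "Default the `archived` filter to the archived status of the collection
        whose items are being listed."
        [collection params]
        (if (contains? params :archived)
          (:archived params)
          (:archived collection)))
      ```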
      
      No change - this endpoint still takes an `archived` parameter. When `archived=true`, we return the Trash collection, as it is in the root collection. Otherwise, we don't.
      
      * Make Trash Usable - UI (#41666)
      
      * Remove Archive Page + Add `/trash` routing (#42226)
      
      * removes archive page and related resources, adds new /trash route for trash collection, adds redirects to ensure consistent /trash routing instead of collection path
      
      * fixes unit + e2e tests, corrects links generated for trash collection to use /trash over /collection/:trashId route
      
      * updates comment
      
      * Serialize trash correctly (#42345)
      
      Also, create the Trash in a migration rather than on startup. Don't set a specific trash collection ID; instead, just select based on the `type` when necessary.
      
      * Fix collection data for trashed items (#42284)
      
      * Fix collection IDs for trashed items
      
      When something is in the trash, we need to check permissions on the
      `trashed_from_collection_id` rather than the `collection_id`. We were
      doing this. However, we want the actual collection data on the search
      result to represent the actual collection it's in (the trash). I added
      the code to do this, a test to make sure it works as intended, and a
      comment to explain what we're doing here and why.
      
      * Refactor permission for trashed_from_collection_id
      
      Noah pointed out that the logic of "what collection do I check for
      permissions" was getting sprinkled into numerous places, and it felt a
      little scary. In fact, there was a bug in the previous implementation.
      If you selected a collection with `(t2/select [:model/Collection ...])`,
      selecting a subset of columns, and *didn't* include the
      `trashed_from_collection_id` in that set of columns, then called
      `mi/can-write?` on the result, you'd get erroneous results.
      Specifically, we'd return `nil` (representing the root collection).
      
      I think this is a reasonable fix, though I'm pretty new to using fancy
      multimethods with `derive` and such. But basically, I wanted a way to
      annotate a model to say "check these permissions using
      `:trashed_from_collection_id` if the item is archived." And in this
      case, we'll throw an exception if `:trashed_from_collection_id` is not
      present, just like we currently throw an exception if `:collection_id`
      is not present when checking permissions the normal way.
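
      A generic sketch of that pattern using Clojure's keyword hierarchy (this is not the actual Metabase implementation, just the shape of the idea):

      ```clojure
      ;; models opt in by deriving from a marker keyword; the permission lookup
      ;; dispatches through the hierarchy
      (derive :model/Dashboard ::archived-checks-trashed-from-collection)
      (derive :model/Card      ::archived-checks-trashed-from-collection)

      (defmulti perms-collection-id
        "Which collection's permissions apply to this instance?"
        (fn [model _instance] model))

      (defmethod perms-collection-id :default
        [_model instance]
        (:collection_id instance))

      (defmethod perms-collection-id ::archived-checks-trashed-from-collection
        [_model instance]
        (if (:archived instance)
          (do
            (when-not (contains? instance :trashed_from_collection_id)
              ;; fail loudly rather than silently treating a missing column as
              ;; the root collection
              (throw (ex-info "trashed_from_collection_id was not selected" {})))
            (:trashed_from_collection_id instance))
          (:collection_id instance)))
      ```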
      
      * Move existing archived items to the trash (#42241)
      
      Move everything that's archived directly to the trash. It's not ideal
      because we're wiping out the existing collection structure, but the
      existing Archive page also has no collection structure.
      
      * lint fixes from rebase
      
      * Fix backend tests (#42506)
      
      `can_write` on collection items was wrong because we didn't have a
      `trashed_from_collection_id` - the refactor to ensure we didn't make
      that mistake worked! :tada:
      
      
      
      * Add an /api/collections/trash endpoint (#42476)
      
      This fetches the Trash in exactly the same way as if we'd fetched it
      with `/api/collection/:id` with the Trash ID. I hadn't realized that the
      frontend was doing this with the previously hardcoded Trash ID.
      
      * Make Trash Usable - Dynamic Trash Collection Id (#42532)
      
      * wip
      
      * fixes hardcoded reference to trash id in sidebar
      
      * remove TRASH_COLLECTION
      
      * fixes lint and e2e tests
      
      * fix inverted logic mistake and fix lint
      
      * Make Trash Usable - Search Filter UI (#42402)
      
      * adds filtering for archived items on the search page
      
      * fix typing mistake
      
      * Make Trash Usable - Bug Bash 1 (#42541)
      
      * disables reordering columns in archived questions, disables modifying archived question name in question header, collection picker no longer defaults to archived item, keeps trashed collections from appearing in collection picker search results, stops showing empty area in trashed dashboard info sidebar, disables uploading CSVs to trashed collections
      
      * impls pr feedback
      
      * fix e2e failure
      
      * Localize the name of the Trash (#42514)
      
      There are at least two spots where we can't just rely on the
      after-select hook because we select the collection name directly from
      the database: the Search and Collection APIs.
      
      In these cases we need to call `collection/maybe-localize-trash-name`,
      which will localize the name if the passed Collection is the Trash
      collection.
      
      * Update migration IDs
      
      Migration IDs need to be in order.
      
      * Fix failing mariadb:latest test (#42608)
      
      Hooooly cow. Glad to finally track this down. The issue was that
      booleans come back from MariaDB as NON BOOLEANS, and clojure says 0 is
      truthy, and together that makes for FUN TIMES.
      
      We need to turn MariaDB's bits to booleans before we can work with them.
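
      The kind of coercion needed, as a plain-Clojure sketch (applied wherever MariaDB rows are read):

      ```clojure
      (defn bit->boolean
        "Sketch: MariaDB can return bit(1) columns as numbers or byte arrays,
        and `(if 0 ...)` is truthy in Clojure, so coerce explicitly."
        [v]
        (cond
          (boolean? v) v
          (number? v)  (not (zero? v))
          (bytes? v)   (not (zero? (aget ^bytes v 0)))
          :else        (boolean v)))
      ```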
      
      * Make Trash Usable: Add `can_restore` to API endpoints (#42654)
      
      Rather than having two separate implementations of the `can_restore`
      hydration for cards and dashboards, I set up automagic hydration for the
      `trashed_from_collection_id`. Then the hydration method for
      `can_restore` can just first hydrate `trashed_from_collection` and then
      operate based on that.
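
      A hypothetical sketch of what that hydration method could look like once `trashed_from_collection` is hydrated; the actual restore rules are an assumption here:

      ```clojure
      (defn can-restore?
        "Sketch: an archived item can be restored when the collection it was
        trashed from still exists and is not itself archived."
        [item]
        (let [parent (:trashed_from_collection item)] ; hydrated just before this step
          (boolean (and (:archived item)
                        ;; a nil parent means "trashed from the root collection"
                        (or (nil? parent)
                            (not (:archived parent)))))))
      ```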
      
      * fix can_restore for collections trashed from root
      
      * Fix `trashed_from_location` for trashed subcols
      
      We were setting `trashed_from_location` on subcollections to the
      `trashed_from_location` of the ancestor that was trashed - which is
      wrong.
      
      This would have caused bugs where collections would be restored to the
      root collection, regardless of where they were originally trashed from.
      
      ---------
      
      Co-authored-by: Sloan Sparger <sloansparger@gmail.com>
      
      * Fix Trash rollbacks (#42710)
      
      * Fix Trash rollbacks
      
      There were a few errors in my initial implementation.
      
      First, we can't simply assume that `trashed_from_location` and
      `trashed_from_collection_id` are set for anything in the archive. They
      should be set for things *directly* trashed, but not for things trashed
      as part of a collection.
      
      Therefore, we need to set `location` and `collection_id` to something
      like "if the item is in the trash, then the `trashed_from` - otherwise,
      don't modify it".
      
      Second, because `trashed_from_collection_id` is not a foreign key but
      `collection_id` is, we have a potential problem. If someone upgrades,
      Trashes a dashboard, then Trashes and permanently deletes the collection
      that dashboard _was_ in, then downgrades, how do we move the dashboard
      "back"? What do we set the dashboard's `collection_id` to?
      
      The solution here is that when we downgrade, we don't actually move
      dashboards, collections, or cards out of the Trash collection. Instead
      we just make Trash a normal collection and leave everything in it.
      
      * Make Trash Usable - Bug Bash 2 (#42787)
      
      * wip
      
      * fixes access to property on null value
      
      * pr clean up
      
      * more clean up
      
      * Fix up tests
      
      - permissions will check the trashed-from location, so recents need to
      select :report_card.trashed_from_collection_id and
      :dash.trashed_from_collection_id to satisfy mi/can-read?
      - some tests had commented-out code with `(def wtf (mt/user-http-request
      ...))`. Restore those tests.
      - they were probably commented out because the recent views come back in
      an order we don't care about, and the assertion was against an ordered
      collection. Just go from [{:id <id> :model <model>} ...] to a map of
      {<id> <model>} and then the comparison works fine and is not sensitive
      to order.
      
      * put try/finally around read-only-mode
      
      ensure that setting read-only-mode! back to false happens in a finally
      block. Also, set read-only-mode to false in:
      
      ```clojure
      (deftest read-only-login-test
        (try
          (cloud-migration/read-only-mode! true)
          (mt/client :post 200 "session" (mt/user->credentials :rasta))
          (finally
            (cloud-migration/read-only-mode! false))))
      ```
      
      But worryingly, I think we have a lurking problem that I'm not sure why
      we haven't hit yet. We run tests in parallel and then put all of the
      non-parallel tests on the same main thread. And when one of these puts
      the app in read-only-mode, the parallel tests will fail. HOPEFULLY since
      they are parallel they won't be hitting the app-db necessarily, but
      there could be lots of silly things that break.
      
      ---------
      
      Co-authored-by: John Swanson <john.swanson@metabase.com>
      Co-authored-by: dan sutton <dan@dpsutton.com>
    • Update Metabase Analytics content for v50 (#42799) · dc4c5617
      Luiz Arakaki authored
      * update Metabase analytics with cache dashboard tab and the subscriptions and alerts dashboard
      
      * update yamls without view_count
      
      * fix metabase analytics namespace
  14. May 16, 2024
    • Cloud Migration Feature (#41317) · b7253af4
      Filipe Silva authored
      
      * feat: cloud migration endpoints
      
      * fix: don't leave open conn behind when checking migrations
      
      `unrun-migrations` would open a new connection but not close it.
      Since `with-liquibase` is happy enough using a data-source the fix is straightforward.
      
      You can verify this by running the following code:
      
      ```
      (comment
        (require '[clojure.java.io :as io]
                 '[metabase.cmd.dump-to-h2 :as dump-to-h2]
                 '[metabase.analytics.prometheus :as prometheus])
      
        (defn dump []
          (-> (str "cloud_migration_dump_" (random-uuid) ".mv.db")
              io/file
              .getAbsolutePath
              dump-to-h2/dump-to-h2!))
      
        (defn busy-conns []
          (-> (prometheus/connection-pool-info)
              first
              second
              :numBusyConnections))
      
        ;; each dump leaves behind 1 busy conn
      
        (busy-conns)
        ;; => 0
      
        (dump)
        (busy-conns)
        ;; => 1
      
        (dump)
        (busy-conns)
        ;; => 2
      
        (dump)
        (busy-conns)
        ;; => 3
      
        )
      ```
      
      * fix: flush h2 before returning from dump
      
      * rfct: move code to models.cloud-migration
      
      * test: add login while on read-only test
      
      * fix: assorted cloud_migration fixes from review
      
      * test: allow overriding uploaded dump
      
      * fix: add UserParameterValue to read-only exceptions
      
      Also make the list a little bit nicer.
      
      * fix: only block dumped tables on read-only
      
      * fix: recover on startup if read-only was left on
      
      * feat: block migration when hosted already
      
      * test: test settings for migration
      
      * feat: cloud migration supports retries and multipart
      
      * test: sane dev defaults for migration
      
      * fix: upload 100% shouldn't be migration 100%
      
      * chore: make up a new migration id after merge
      
      * Cloud Migration FE (#42542)
      
      * it's a start
      
      * ui wip
      
      * wip
      
      * dynamic polling intervals, and a custom migrate confirmation modal
      
      * cleans out most of the remaining UI TODOs
      
      * adding progress component
      
      * impls team feedback
      
      * makes component more testable, starts a unit test for the CloudPanel
      
      * finish unit testing
      
      * reverts api changes
      
      * update progress styling
      
      * fix type issues
      
      * fix e2e failure, fix feature unit tests by holding the last migration state in fetchMock if more requests than expected happen at the end of a test, remove whitespace change in clj file
      
      * second pass at fixing tests
      
      * fix copy from ready-only to read-only
      
      * copy fix
      
      * Update frontend/src/metabase/admin/settings/components/CloudPanel/MigrationError.tsx
      
      Co-authored-by: Raphael Krut-Landau <raphael.kl@gmail.com>
      
      * Update frontend/src/metabase/admin/settings/components/CloudPanel/MigrationInProgress.tsx
      
      Co-authored-by: Raphael Krut-Landau <raphael.kl@gmail.com>
      
      * adding e2e test
      
      * pr feedback
      
      ---------
      
      Co-authored-by: Nick Fitzpatrick <nickfitz.582@gmail.com>
      Co-authored-by: Raphael Krut-Landau <raphael.kl@gmail.com>
      
      ---------
      
      Co-authored-by: Sloan Sparger <sloansparger@users.noreply.github.com>
      Co-authored-by: Nick Fitzpatrick <nickfitz.582@gmail.com>
      Co-authored-by: Raphael Krut-Landau <raphael.kl@gmail.com>
    • Basic Upsell System Setup + Hosting Upsell (#42785) · 43084103
      Ryan Laurie authored
      * Basic Upsell System Setup
      
      * Hosting Upsell
      
      * add upsell to updates page
      
      * update utms
      
      * update copy, utms and text
    • Area Bar Combo Stacked Viz Settings Migration (#42740) · 672fa62e
      adam-james authored
      * Area Bar Combo Stacked Viz Settings Migration
      
      Per: https://www.notion.so/metabase/Migration-spec-e2732e79215a4a5fa44debb272413db9
      
      This addresses the necessary migrations for area/bar/combo stacked viz settings.
      
      * Custom migration test works
      
      * Check that non area/bar/combo charts also remove stack_display
  15. May 15, 2024
  16. May 13, 2024
  17. May 09, 2024
  18. May 08, 2024
  19. May 07, 2024
  20. May 01, 2024
  21. Apr 30, 2024
    • Store Parameter Values Set by User on a per-user basis (#40415) · 2d29f05f
      adam-james authored
      * Store Parameter Values Set by User on a per-user basis
      
      This is a WIP for #28181 and the notion doc:
      
      https://www.notion.so/metabase/Tech-Maintain-user-level-state-in-dashboards-for-filters-fc16909a3216482f896934dd94b54f9a
      
      Still to do:
      
      - [ ] validate the table/model design
      - [ ] hook up the endpoints mentioned in the doc (2 in api/dashboard)
      - [ ] return the user specific parameter values on /api/dashboard/:dashboardID endpoints
      - [ ] write a few tests to capture the intent of this feature
      
      * Accidentally deleted a digit in the change ID timestamp
      
      * first pass at writing user param values to the db.
      
      It's in a doseq here which is probably not the correct way to go yet, but it's a step in the right direction.
      
      * Hydrate dashboard response with `:last_used_param_values` key
      
      If the user has previously set a parameter's value, it will show up in the map under that parameter id.
      
      If the user has no parameter value set, that entry will be 'null'.
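
      A hedged illustration of the hydrated key's shape (parameter IDs and values below are made up):

      ```clojure
      {:last_used_param_values {"5791ff54" "Gadget" ; this user previously picked "Gadget"
                                "a21ca93d" nil}}    ; this user has not set a value
      ```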
      
      * Use proper fn name
      
      * Only save or retrieve user params if there's a current user id
      
      * Add model to necessary lists
      
      * Only run query when there are parameter ids to run it with
      
      * Add a test to confirm that last_used_param_values works
      
      * Add models test namespace for CRUD test
      
      * The hydration is checked in the dashboard api test already
  22. Apr 29, 2024
  23. Apr 26, 2024
  24. Apr 25, 2024
  25. Apr 24, 2024