This project is mirrored from https://github.com/metabase/metabase.
  1. Jun 21, 2024
  2. Jun 20, 2024
  3. Jun 19, 2024
    • Case Nelson
      fix: populate param-fields for named fields on public dashboards (#44440) · 523ea248
      Case Nelson authored
      
      * Update e2e test
      
      * Update e2e test
      
      * fix: populate param-fields for named fields on public dashboards
      
      * Add name to public hydration
      
      * Add tests
      
      * Add tests
      
      * Fix tests
      
      ---------
      
      Co-authored-by: Alexander Polyankin <alexander.polyankin@metabase.com>
    • Cal Herries
      Exclude native query matches in search scoring when the search should exclude... · 3f788bba
      Cal Herries authored
      Exclude native query matches in search scoring when the search should exclude native queries (#43982)
      
      Co-authored-by: Chris Truter <crisptrutski@users.noreply.github.com>
    • adam-james
      User parameter value json fix (#44420) · bbde99dd
      adam-james authored
      
      * UserParameterValue transforms wrapped to properly escape string vals
      
      The `json-in` and `json-out` functions used for Toucan model transforms do not perform any string-escaping logic. This
      makes sense, as we don't want to make assumptions about the shape of data flowing in/out of the db. But it did mean
      that for the user parameter values table, string values were causing an error to be logged. This doesn't break
      anything, as the function will still correctly return the string, but it can clutter up the logs.
      
      So, this PR wraps all incoming values in a map and unwraps it for outgoing values.
      
      * Add migration to wrap existing user param values with ::wrapper key
      
      Since the in/out transform for UserParameterValues is updated, we need to migrate any existing values to have this wrapping.
      
      * Simplify the solution to the problem.
      
      Since we're already getting what we need from the json-in/json-out *except* that it's logging a parse error, we can
      create a json-out that works the same way (tries to parse and returns the string as-is if it fails) without logging an
      error at all.
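      The lenient json-out described above can be sketched in Python (the real code is Clojure; the function name below is hypothetical):

```python
import json

def lenient_json_out(value):
    """Try to parse a stored value as JSON; on failure, return the raw
    string unchanged instead of logging a parse error. Python sketch of
    the json-out behavior described in the commit message."""
    if not isinstance(value, str):
        return value
    try:
        return json.loads(value)
    except ValueError:
        # Plain strings like "last_week" aren't valid JSON documents,
        # so hand them back as-is, silently.
        return value
```

Valid JSON round-trips as before; bare strings no longer produce log noise.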
      
      * take away the arg and keywordize inside the json-out fn
      
      * remove irrelevant comment
      
      Signed-off-by: Adam James <adam.vermeer2@gmail.com>
      
      * Add a little more test coverage for different value types
      
      * faster test thanks to Dan!
      
      * Add a comment about the data being tested
      
      ---------
      
      Signed-off-by: Adam James <adam.vermeer2@gmail.com>
    • adam-james
      XLSX Pivot Table Downloads have 'Native' Pivot Table (#43791) · 89bdaa2f
      adam-james authored
      * XLSX Pivot Table Downloads have 'Native' Pivot Table
      
      Addresses: #2473
      
      The xlsx downloads work fine for regular tables. However, on Pivot Table downloads, the resulting file isn't pivoted
      and is formatted somewhat strangely (it contains additional rows related to totals and has an extra 'pivot-grouping'
      column).
      
      Now, the downloaded xlsx file contains 2 sheets:
      - the data sheet containing the question's rows unpivoted. If you were constructing a pivot table yourself, you would
      probably start with this shape of data.
      - the pivot sheet containing a 'native' Pivot table whose reference points to the data on the data sheet, and whose
      columns, rows, and values align with the cols, rows, and measures set up in Metabase
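      As a rough illustration of what the pivot sheet derives from the data sheet, here is a Python sketch that pivots unpivoted (row, column, measure) triples, assuming a sum aggregation (all names here are hypothetical; the real export writes a native xlsx pivot definition):

```python
from collections import defaultdict

def pivot(rows):
    """Pivot (row_key, col_key, value) triples into a nested dict,
    aggregating duplicate cells by summation -- a rough sketch of what
    a native pivot computes over the unpivoted data sheet."""
    table = defaultdict(lambda: defaultdict(float))
    for row_key, col_key, value in rows:
        table[row_key][col_key] += value
    return {r: dict(cols) for r, cols in table.items()}

# unpivoted rows, as they would appear on the data sheet
data = [("Widget", "2024", 10), ("Widget", "2025", 5),
        ("Gadget", "2024", 7), ("Widget", "2024", 3)]
```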
      
      * Silly typo!
      
      * Exported Pivot uses correct aggregation (sum, count, stddev, avg, min, max)
      
      * data for pivot exports now also uses the formatting
      
      * Add a test asserting that a pivot table exists in the xlsx file
      
      * add-row! is a method dispatching on sheet class
      
      * remove unnecessary hint
      
      * address review feedback:
      
       - consolidate the aggregation function key fns into one function in the .xlsx namespace
         this was moved out of postprocessing so that the xlsx specific fns don't litter the post processing namespace at
         this time
      
       - clean up an unnecessary destructuring inside the native-pivot function
       - don't shadow 'name' inside the body of a fn
      
      * Add tests that confirm zero-col and zero-row situations work
      
      * Consolidate the atoms used to store pivot rows/options
    • Chris Truter
    • Braden Shepherdson
      [QP] Fix breakouts on a nested model (#44418) · 3ccf4d41
      Braden Shepherdson authored
      Fixes #43993 for real.
      
      The earlier fix #44182 was needlessly restricting the
      "nominal refs" check to fields which were using numeric IDs,
      when really any match on nominal refs is valid.
    • Braden Shepherdson
    • Alexander Solovyov
      [openapi] fix query parameter detection and add default/description handling (#44239) · f284a522
      Alexander Solovyov authored
      Also OpenAPI generation now handles parameter renaming, like in `{c :count}` destructuring.
    • Ngoc Khuat
      Metabase channel (#43924) · 613318f7
      Ngoc Khuat authored
  4. Jun 18, 2024
    • adam-james
      Pivot Tables with no Pivot Columns should still download (#44329) · 37abc084
      adam-james authored
      * Pivot Tables with no Pivot Columns should still download
      
      Fixes #44159
      
      Pivot Tables can still be valid without columns, and therefore the download should respect such a query and be
      successfully exported.
      
      In this PR, the postprocessing of pivot results into 'visual pivot' exports for csv takes into account the zero-column
      configuration and the download no longer fails.
      
      * add a test to confirm that zero cols and >1 measures works as well
      
      * Handle the case where there are zero pivot-rows.
      
      This is a similar problem to the zero pivot-cols case, just with rows.
      
      The app actually doesn't render these tables correctly, but that's a separate frontend bug.
    • John Swanson
      Fix missing `can_restore` on collections (#44287) · d6873e86
      John Swanson authored
      I'd fixed the `can_restore` on non-collections to always be present, but
      forgot about fixing it for collections as well.
    • Nemanja Glumac
      Fix metabase lib returning the string "null" as the drill through value (#44234) · bc78552d
      Nemanja Glumac authored
      
      * Convert drill-through (CLJ) values to JS
      
      * Convert the drill-through `:null` to an actual `null`
      
      Resolves #44232
      
      * Use new function in `filter-drill-details`
      
      * display-info->js should check for nil before seqable?
      
      * fix js object comparison
      
      * Add test for the `drill-value->js` function
      
      * Simplify the test
      
      * Fix the JS object equality in the test
      
      ---------
      
      Co-authored-by: Alexander Solovyov <alexander@solovyov.net>
  5. Jun 17, 2024
    • Alexander Polyankin
    • Case Nelson
      Change run to make-run in process-query-for-card (#44244) · ea9dfd1d
      Case Nelson authored
      * Change run to make-run in process-query-for-card
      
      Fixes #44160
      
      There were two levers by which a card could change how it was run.
      
      `qp`: basically `qp/process-query` with some optional userland
      flags and constraints marked on the passed-in query. In the case of
      pivot tables in the `/pivot/:card-id/query` endpoints it was
      `qp.pivot/process-query`.
      
      `run`: defaults to a streaming response that takes the `qp` param
      and `export-format` and passes a query to `qp`. Most often this is
      overridden to not stream the response.
      
      Unfortunately, if `run` is overwritten, then `qp` is thrown away, and
      the caller who overrides `run` must determine the correct `qp` to use.
      
      However, when running `process-query-for-dashcard` it is difficult to properly set
      `qp` for all of the dashcards without doing card lookups. So [this commit](https://github.com/metabase/metabase/blob/release-x.50.x/src/metabase/query_processor/card.clj#L230-L232) did a check to swap the runner in the presence of a pivot query that calls the **default** runner with a `qp.pivot/run-pivot-query`. The problem is that pulse needs to use a non-default runner, but there's no way for `process-query-for-card` to do both.
      
      This commit changes `run` to `make-run` which raises the abstraction to
      a function that produces a runner given a `qp`. This allows
      `process-query-for-card` to change the `qp` as needed while keeping the
      passed in runner.
      
      There are really three things going on that could be teased apart
      further/better.
      
      1. `qp` - this conflates the processor (`qp/process-query` `qp.pivot/run-pivot-query`)
        and modifying query (`qp/userland-query`,
        `qp/userland-query-with-default-constraints` `(update query :info merge info)`)
      2. `make-run` - How the above are processed: (streaming/realized)
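      The make-run refactor is a higher-order-function change; a Python sketch under assumed names (the real code is Clojure, and the stand-in qp below is hypothetical):

```python
def default_qp(query, export_format):
    # stand-in for qp/process-query (hypothetical)
    return {"status": "completed", "query": query, "format": export_format}

def make_streaming_run(qp):
    """Given a query processor, return a runner. With this shape,
    process-query-for-card can swap the qp (e.g. for pivot queries)
    while keeping whatever runner the caller supplied.
    Python sketch of the make-run refactor described above."""
    def run(query, export_format):
        # a real runner would stream the response; the sketch just delegates
        return qp(query, export_format)
    return run

# the caller picks the runner; the card code picks the qp
run = make_streaming_run(default_qp)
```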
      
      * Fix tests
  6. Jun 14, 2024
    • John Swanson
      Fix collection id on collection items endpoint (#44236) · 27e1ff64
      John Swanson authored
      * Fix a potential bug in `hydrate-can-restore`
      
      I had a bug here. I wanted to hydrate `:collection` on the items and
      then return the original items, but instead I was `dissoc`ing
      `:collection` from the items. If the original items passed in already
      had `:collection` keys this would break them.
      
      AFAIK this didn't cause any issues, but it could have - so fixing it here.
      
      * Add a test
      
      * Present collection items in the collection
      
      When fetching collection items, we need to fetch the true
      `collection_id` in order to figure out things like `can_restore` and
      `can_delete`, but set it to the displayed/effective `collection_id` when
      we actually send it out.
      
      This fixes a bug where collections had their `collection_id` match their
      `id` when returned by this endpoint.
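      A Python sketch of the presentation step (the can_restore rule here is an assumption for illustration; the real hydration logic is Clojure):

```python
def present_collection_item(item, effective_collection_id):
    """Compute permission-derived flags from the item's true
    collection_id, then overwrite collection_id with the
    displayed/effective one before the item goes out over the API."""
    true_id = item["collection_id"]
    presented = dict(item)
    # hypothetical rule: restorable only if archived and not in the root
    presented["can_restore"] = bool(item.get("archived")) and true_id is not None
    presented["collection_id"] = effective_collection_id
    return presented
```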
    • John Swanson
      Always provide `can_restore` (#44124) · e677eb7a
      John Swanson authored
      Previously we were just adding this when the item was archived. Instead,
      always send a value - if the item is not archived, it can't be restored,
      so it will be `false` in this case.
      
      Making this change will make fixing a frontend bug easier, because the
      frontend will be able to just trust the `can_restore` value coming in
      from the backend.
  7. Jun 13, 2024
  8. Jun 12, 2024
  9. Jun 11, 2024
    • Case Nelson
      [Metrics V2] Metrics with filters on implicitly joined columns produce query... · febfe4a5
      Case Nelson authored
      [Metrics V2] Metrics with filters on implicitly joined columns produce query error when added in a Summarize block (#43987)
      
      * [Metrics V2] Metrics with filters on implicitly joined columns produce query error when added in a Summarize block
      
      * Handle multiple metrics with same implicit join
      
      * Address PR feedback
    • bryan
      Make recents understand context (#43478) · 7b849da3
      bryan authored
      
      * adds the endpoint and a test
      
      * add recents endpoint, todo: tests
      
      * add post recents endpoint
      
      * return recent views for both lists, but make the endpoint work
      
      * Make recent-views context aware
      
      - pruning is context aware, only checks for the recently-inserted
        context
      - Adds endpoints:
          - POST to create selection recents,
          - GET activity/recents
            - requires context query param to be one of:
            - all, views, selections
      
      - Adds context arg to update-users-recent-views!
      - Cleans up arg schema for update-users-recent-views
      
      * impl GET api/activity/recents
      
      - return recent-selections and recent-views from
      - send context == collection from pinned card reads
      
      * update callsites of recent-views/update-users-recent-views!
      
      - to use :view param where necessary
      
      * fixes models/recent-view tests
      
      * adds more activity/recent-view tests
      
      * wip
      
      - fix whitespace linter
      
      * Fix command palette e2e test
      
      - reuse util snake-keys
      
      * updates fe to use new recents endpoints
      
      * fixes fe type issue
      
      * snake-keys -> deep-snake-keys
      
      - I've been betrayed by lsp rename
      
      * snake-keys -> deep-snake-keys
      
      - I've been betrayed by lsp rename
      
      * log selection events for created collections
      
      * mysql doesn't allow default values for TEXT types
      
      * log a recent view on data-picker selection
      
      * decouple view-log context from card-event view context
      
      * fix a doc typo
      
      * stop double logging recent-views on POST
      
      * maybe fixes some tests
      
      * some e2e fixes
      
      * fix mysterious divide by zero during score search
      
      * fix divide by zero possibilities everywhere in score-items
      
      metabase.api.activity/score-items used to throw when there weren't any
      items being scored (even though there's a `(when (seq items) ...)` check)
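      The guard can be sketched in Python (the raw scoring function below is a hypothetical stand-in; only the zero-division guard mirrors the fix described above):

```python
def score_items(items):
    """Score each item relative to the best raw score, guarding the
    divide-by-zero case when every raw score is zero or the list is
    empty. Sketch of the score-items fix; the real scorer is Clojure."""
    raw = [len(item.get("name", "")) for item in items]  # hypothetical raw score
    top = max(raw, default=0)
    if top == 0:
        # nothing scored above zero: no division, everything scores 0
        return [0.0 for _ in items]
    return [r / top for r in raw]
```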
      
      * more test fixes
      
      * fix more e2e tests, + rename endpoint in tests
      
      * fix oopsie
      
      * fixes a few more tests
      
      * address review comments/questions
      
      * allow for a comma delimited list in qps like ?context=views,selections
      
      returns all recent view items with those contexts, most recent first
      under the `:recents` key.
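      A Python sketch of the context filtering (names and data shapes are assumptions; the real endpoint is Clojure):

```python
def recents_for_contexts(recent_items, contexts):
    """Return recent-view items whose context is among those requested,
    most recent first. Sketch of GET activity/recents with
    ?context=views,selections, contexts already parsed into a list."""
    allowed = {"views", "selections"} if "all" in contexts else set(contexts)
    hits = [i for i in recent_items if i["context"] in allowed]
    return sorted(hits, key=lambda i: i["timestamp"], reverse=True)
```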
      
      * refactors FE around new endpoint
      
      * fixes for unit tests
      
      * use ms/QueryVectorOf for context
      
      * fix models/recent_views tests
      
      * use multiple query params instead of comma delimited value on FE
      
      * ignore timestamp when deduping recents
      
      * review comments + test fixing
      
      * update docstring
      
      * fix api/activity_test s
      
      * actually dedupe
      
      * add test for deduping by context
      
      * e2e fix: shows up-to-date list of recently viewed items after another page is visited
      
      * e2e fix: should undo the question replace action
      
      * e2e fix: should replace a dashboard card question (metabase#36984)
      
      * e2e fix: should preselect the most recently visited dashboard
      
      * fix 6 more e2e tests
      
      * fixes fe type check and unit failure
      
      * renames unit test mocking function
      
      * fix a flaky e2e test
      
      * widen Item to accept str or kw where sensible
      
      - allow strings or keywords for moderated_status, and authority_level
      
      * simplify impl + add test
      
      * add view-log to events schema
      
      * add collection context to view log
      
      * fix the final 2 failing e2e tests
      
      * click dashboard tab when the user has can-read? on recent entities
      
      ---------
      
      Co-authored-by: Sloan Sparger <sloansparger@gmail.com>
    • John Swanson
      Improve the Trash data model (#42845) · 7374e61e
      John Swanson authored
      
      * Improve the Trash
      
      Ok, so I had a realization at the PERFECT time, immediately after the RC
      cutoff. Great job, brain!
      
      Here's the realization. For the Trash, we need to keep track of two
      things:
      
      - where the item actually is located in the hierarchy, and
      
      - what collection we should look at to see what permissions apply to the
      item.
      
      For example, a Card might be in the Trash, but we need to look at
      Collection 1234 to see that a user has permission to Write that card.
      
      My implementation of this was to add a column,
      `trashed_from_collection_id`, so that we could move a Card or a
      Dashboard to a new `collection_id`, but keep track of the permissions we
      actually needed to check.
      
      So:
      
      - `collection_id` was where the item was located in the collection
      hierarchy, and
      
      - `trashed_from_collection_id` was where we needed to look to check
      permissions.
      
      Today I had the realization that it's much, much more important to get
      PERMISSIONS right than to get collection hierarchy right. Like if we
      mess up and show something as in the Trash when it's not in the Trash,
      or show something in the wrong Collection - that's not great, sure. But
      if we mess up and show a Card when we shouldn't, or show a Dashboard
      when we shouldn't, that's Super Duper Bad.
      
      So the problem with my initial implementation was that we needed to
      change everywhere that checked permissions, to make sure they checked
      BOTH `trashed_from_collection_id` and `collection_id` as appropriate.
      
      So... there's a much better solution. Instead of adding a column to
      represent the *permissions* that we should apply to the dashboard or
      card, add a column to represent the *location in the hierarchy* that
      should apply to the dashboard or the card.
      
      We can simplify further: the *only time* we want to display something in
      a different place in the hierarchy than usual is when it was put
      directly into the trash. If you trash a dashboard as a part of a
      collection, then we should display it in that collection just like
      normal.
      
      So, we can do the following:
      
      - add a `trashed_directly` column to Cards and Dashboards, representing
      whether they should be displayed in the Trash instead of their actual
      parent collection
      
      - use the `collection_id` column of Cards and Dashboards without
      modification to represent permissions.
      
      There's one main downside of this approach. If you trash a dashboard,
      and then delete the collection that the dashboard was originally in,
      what do we do with `dashboard.collection_id`?
      
      - we have to change it, because it's a foreign key
      
      - we can't set it to null, because that represents the root collection
      
      In this initial implementation, I've just cascaded the delete: if you
      delete a dashboard and then delete a collection, the dashboard will be
      deleted. This is not ideal. I'm not totally sure what we should do in
      this situation.
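      The design can be sketched in Python (hypothetical names; the real columns are trashed_directly, later renamed archived_directly, and collection_id):

```python
def display_collection(item, trash_collection_id):
    """Where should the item appear in the hierarchy? Directly-trashed
    items display in the Trash; everything else displays in its real
    parent collection. Sketch of the trashed_directly design above."""
    if item.get("trashed_directly"):
        return trash_collection_id
    return item["collection_id"]  # None means the root collection

def permission_collection(item):
    # permissions are always checked against collection_id, unmodified
    return item["collection_id"]
```

The point of the split is that a bug in display_collection misplaces an item in the UI, while permission_collection stays trivially correct.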
      
      * Rip out all the `trashed_from_collection_id`
      
      * Migration to delete trashed_from_collection_id
      
      * fixes
      
      * don't move collections
      
      And don't allow deleting collections
      
      * only show cards/dashboards with write perms
      
      * Show the correct archived/unarchived branch
      
      * some cleanup
      
      * add a todo for tomorrow
      
      * Fix for yesterday's TODO
      
      * more wip
      
      * refactor
      
      * memoize collection info
      
      * move around memoization a bit
      
      * fix schema migration test
      
      * oops, delete server.middleware.memo
      
      * Use a migration script (postgres only for now)
      
      * Fix some tests
      
      * remove n+1 queries in `collection-can-restore`
      
      * fix test
      
      * fix more tests, and x-db migration script
      
      * fix h2 rollback
      
      * fix mysql/mariadb migration
      
      * lint
      
      * fix some mariadb/mysql tests
      
      * fix h2 rollback
      
      * Fix mysql rollback
      
      * Fix Postgres migration
      
      * "Real" `trash_operation_id` UUIDs from migration
      
      * fix mariadb migration
      
      * Separate MySQL/MariaDB migrations
      
      * trashed directly bit->boolean
      
      * Remove `trashed_from_*` from migrations
      
      * Rename `api/updates-with-trashed-directly`
      
      Previously named `move-on-archive-or-unarchive`, which was no longer
      accurate since we're not moving anything!
      
      * Add `can_delete`
      
      * Delete test of deleted code
      
      * Can't move anything to the Trash
      
      The Trash exists as a real collection so that we can list items in and
      such without needing special cases. But we don't want anything to
      _actually_ get moved to it.
      
      * integrates can_delete flag on FE
      
      * Update src/metabase/models/collection.clj
      
      Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
      
      * Update src/metabase/search/impl.clj
      
      Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
      
      * Better name for `fix-collection-id`
      
      Also consolidated it into a single function. It takes the
      trash-collection-id as an argument to prevent a circular dependency.
      
      * s/trashed/archived/ on the backend
      
      In the product, we want to move to "trashed" as the descriptor for
      things that are currently-known-as-archived.
      
      It'd be nice to switch to that on the backend as well, but that'll
      require quite a lot of churn on the frontend as we'd need to change the
      API (or have an awkward translation layer, where we call it `trashed` on
      the backend, turn it into `archived` when sending data out via the API,
      and then the FE turns around and calls it "trashed" again).
      
      For now, let's just be consistent on the backend and call it `archived`
      everywhere. So:
      
      - `archived_directly` instead of `trashed_directly`, and
      - `archive_operation_id` instead of `trash_operation_id`
      
      * Fix up a couple docstrings
      
      * Rename `visible-collection-ids` args
      
      for `collection/permissions-set->visible-collection-ids`:
      
      - `:include-archived` => `:include-archived-items`
      
      - `:include-trash?` => `:include-trash-collection`
      
      * select affected-collection-ids in the tx
      
      * Stop dealing with `:all` visible collection ids
      
      This used to be a possible return value for
      `permissions-set->visible-collection-ids` indicating that the user could
      view all collections. Now that function always returns a set of
      collection IDs (possibly including "root" for the root collection) so we
      don't need to deal with that anymore.
      
      * Don't use separate hydration keys for collections
      
      Dispatch on the model of each item (but still keep them in batches for
      efficiency).
      
      * vars for collectable/archived-directly models
      
      * Round up loose `trashed_directly`s
      
      * Fix a test
      
      * Use `clojure.core.memoize` instead of handrolled
      
      It's slightly different (TTL instead of exactly per-request) but that
      should be fine.
      
      * FE lint fixes
      
      * :sob:
      
       e2e test fix
      
      ---------
      
      Co-authored-by: Sloan Sparger <sloansparger@gmail.com>
      Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
  10. Jun 10, 2024
  11. Jun 08, 2024
  12. Jun 07, 2024