  1. Aug 22, 2022
  2. Aug 12, 2022
      Enable Kondo for tests (part 1) (#24736) · bc4acbd2
      Cam Saul authored
      * Fix some small things
      
      * Add Kondo to deps.edn to be able to debug custom hooks from REPL
      
* Fix macroexpansion hook for with-temp* without values (a sketch of this kind of hook appears after this list)
      
      * Test config (WIP)
      
      * More misc fixes
      
      * Disable :inline-def for tests
      
      * More misc fixes
      
      * Fix $ids and mbql-query kondo hooks.
      
      * Fix with-temporary-setting-values with namespaced symbols
      
      * More misc fixes
      
      * Fix the rest of the easy ones
      
      * Fix hook for mt/dataset
      
* Horrible hack to work around https://github.com/clj-kondo/clj-kondo/issues/1773. Custom linter for mbql-query macro
      
      * Fix places calling mbql-query with a keyword table name
      
      * Fix the last few errors in test/
      
      * Fix errors in enterprise/test and shared/test
      
      * Fix driver test errors
      
      * Enable linters on CI
      
      * Enable unresolved-namespace linter for tests
      
      * Appease the namespace linter again
      
      * Test fixes
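
A hedged sketch of the kind of clj-kondo hook this work relies on (the binding shape and namespace here are assumptions for illustration; the real hooks live under the repo's .clj-kondo config and are registered via `:hooks {:analyze-call ...}` in config.edn):

```clojure
(ns hooks.metabase.test
  (:require [clj-kondo.hooks-api :as api]))

;; Sketch: rewrite (with-temp* [Model [binding attrs] ...] body) into a `let`
;; so clj-kondo resolves the bindings instead of reporting them as unknown.
;; The exact binding shape is an assumption for illustration.
(defn with-temp* [{:keys [node]}]
  (let [[_ bindings & body] (:children node)
        pairs        (partition 2 (:children bindings))
        new-bindings (mapcat (fn [[_model binding+attrs]]
                               (:children binding+attrs))
                             pairs)]
    {:node (api/list-node
            (list* (api/token-node 'let)
                   (api/vector-node (vec new-bindings))
                   body))}))
```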
  3. May 31, 2022
      Fix deadlock in pivot table connection management (#22981) · a15fc4ea
      dpsutton authored
      Addresses part of https://github.com/metabase/metabase/issues/8679
      
Pivot tables can have subqueries that run to create tallies. We do not
hold entire result sets in memory, so we have a bit of an inversion of
control flow: connections are opened, queries run, result sets are
transduced, and then the connection is closed.

The error here was that subsequent queries for the pivot were run while
the first connection was still held open, even though that connection
was no longer needed. Enough pivots running at the same time in a
dashboard can then create a deadlock: the subqueries need a new
connection, but the main queries' connections cannot be released until
the subqueries have completed.
      
Also, rf management is critical. Its completion arity must be called
once and only once. We also have middleware that needs to be composed
(format, etc.) and middleware that can only be composed once (limit). We
have to save the original reducing function before composition (this is
the one that can write to the download writer, etc.) but compose it each
time we use it with `(rff metadata)` so we have the format and other
middleware. Keeping this distinction in mind will save you lots of
time. (The limit middleware will ignore all subsequent rows if you just
grab the output of `(rff metadata)` and not the rf returned from the
`:rff` key on the context.)
      
Before the fix, connection management looked like this:
      
      ```
      tap> "OPENING CONNECTION 0"
      tap> "already open: "
        tap> "OPENING CONNECTION 1"
        tap> "already open: 0"
        tap> "CLOSING CONNECTION 1"
        tap> "OPENING CONNECTION 2"
        tap> "already open: 0"
        tap> "CLOSING CONNECTION 2"
        tap> "OPENING CONNECTION 3"
        tap> "already open: 0"
        tap> "CLOSING CONNECTION 3"
      tap> "CLOSING CONNECTION 0"
      ```
      
The fix properly sequences things so that connection 0 is closed before
connection 1 is opened.
      
It hijacks `executef` to just pass that function into the `reducef` part
so we can reduce multiple times and therefore control the
connections. Otherwise `reducef` happens "inside" of `executef`, and the
connection is not closed until `executef` returns.
      
Care is taken to ensure that:
- the init is only called once (subsequent queries have the init of the
rf overridden to just return `init`, the acc passed in, rather than
calling `(rf)`)
- the completion arity is only called once (we use `(completing rf)`, and
the reducing function in the subsequent queries is just `([acc] acc)`,
which does not call `(rf acc)`). Remember this is just on the lower
reducing function; all of the takes, formats, etc. _above_ it will have
their completion arity called because we are using transduce. The
completion arity is what takes the volatile rows and row counts and
actually nests them in the `{:data {:rows []}}` structure. Without
calling it once (and ONLY once) you end up with no actual results; they
are just left in memory.
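
A minimal sketch of that rf discipline (generic Clojure, not the actual qp context plumbing; `bottom-rf`, `xform`, and `result-sets` are hypothetical stand-ins): the bottom rf's init and completion arities each run exactly once, while each sub-query's rows are transduced with a `completing`-wrapped rf whose completion arity is just `([acc] acc)`.

```clojure
;; Sketch only: reduce several result sets with one shared bottom rf.
(defn reduce-all
  [bottom-rf xform result-sets]
  (let [init (bottom-rf)]                     ; init arity: called exactly once
    (bottom-rf                                ; completion arity: called exactly once
     (reduce (fn [acc rows]
               ;; the wrapped rf's completion is identity, so transducing an
               ;; individual result set never "finishes" the bottom rf; the
               ;; xform's own steps (limit, format, ...) still complete normally
               (transduce xform (completing bottom-rf (fn [a] a)) acc rows))
             init
             result-sets))))
```

Each result set can then be produced under its own connection, opened and closed around its `transduce` call, which is what lets connection 0 close before connection 1 opens.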
  4. May 02, 2022
      Persisted models schema (#21109) · c504a12e
      dpsutton authored
      * dir locals for api/let-404
      
      * Driver supports persisted model
      
      * PersistedInfo model
      
      far easier to develop this model with the following sql:
      
      ```sql
      create table persisted_info (
         id serial primary key not null
        ,db_id int references metabase_database(id) not null
        ,card_id int references report_card(id) not null
        ,question_slug text not null
        ,query_hash text not null
        ,table_name text not null
        ,active bool not null
        ,state text not null
        ,UNIQUE (db_id, card_id)
      )
      
      ```
and I'll make the migration later. Way easier to just drop the table, `\i
persist.sql`, and keep developing without worrying about the migration
having changed so it can't roll back, SHAs, etc.
      
      * Persisting api (not making/deleting tables yet)
      
      http POST "localhost:3000/api/card/4075/persist" Cookie:$COOKIE -pb
      http DELETE "localhost:3000/api/card/4075/persist" Cookie:$COOKIE -pb
      
useful from the command line (this is httpie)
      
      * Pull format-name into ddl.i
      
      * Postgres ddl
      
      * Hook up endpoints
      
      * move schema-name into interface
      
      * better jdbc connection management
      
* Hotswap persisted tables into qp
      
      * clj-kondo fixes
      
      * docstrings
      
      * bad alias in test infra
      
      * goodbye testing format-name function
      
Left over; everything uses ddl.i/format-name and this rump remained.
      
      * keep columns in persisted info
      
columns that are in the persisted query. I thought about a tuple of
[col-name type] instead of just the col-name. I didn't include the type
because I want to ensure that we compute the db-type in ONLY ONE WAY
ever, and I wasn't ready to commit to that yet. I'm not sure this is
necessary in the future so it remains out for now.
      
Context: we hot-swap the persisted table in for the original query,
matching on the query hash remaining the same. It continues to use the
metadata from the original query and just does `select cols from table`.
      
      * Add migration for persisted_info table
      
Also removes the db_id. Don't know why I was thinking that was
necessary. It also means we don't need another unique constraint on
(db_id, card_id) since we can just mark the card_id as unique. No idea
what I was thinking.
      
      * fix ns in a sad manner :(
      
      far better to just have no alias to indicate it is required for side
      effects.
      
* Don't hardcode a card-id :(:(:( my B
      
      * copy the PersistedInfo
      
      * ns cleanup, wrong alias, reflection warning
      
      * Check that state of persisted_info is persisted
      
      * api to enable persistence on a db
      
I'm not wild about POST /api/database/:id/persist and POST
/api/database/:id/unpersist, but carrying on. Left a note about it.
      
      So now you can enable persistence on a db, enable persistence on a model
      by posting to api/card/:id/persist and everything works.
      
      What does not work yet is the unpersisting or re-persisting of models
      when using the db toggle.
      
      * Add refresh_begin and refresh_end to persisted_info
      
      This information helps us with two bits:
      - when we need to chunk refreshing models, this lets us order by
      staleness so we can refresh a few models and pick up later
      - if we desire, we can look at the previous elapsed time of refreshes
      and try to gauge amount of work we want. This gives us a bit of
      look-ahead. We can of course track our progress as we go but there's no
      way to know if the next refresh might take an hour. This gives us a bit
      of insight.
      
      * Refresh tables every 8 hours ("0 0 0/8 * * ? *")
      
      Tables are refreshed every 8 hours. There is one single job doing this
      named "metabase.task.PersistenceRefresh.job" but it has 0 triggers by
      default. Each database with persisted models will add a trigger to this
      to refresh those models every 8 hours.
      
      When you unpersist a model, it will immediately remove the table and
      then delete the persisted_info record.
      
      When you mark a database as persist false, it will immediately mark all
      persisted_info rows as inactive and deleteable, and unschedule its
      trigger. A background thread will then start removing the tables.
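
For a sense of the moving parts, here is a hedged sketch of a per-database trigger attached to that shared job using Quartzite (the scheduling library used in the codebase); the identity strings and function names are illustrative, not the exact ones in metabase.task:

```clojure
(ns example.persistence-refresh
  (:require [clojurewerkz.quartzite.jobs :as jobs]
            [clojurewerkz.quartzite.schedule.cron :as cron]
            [clojurewerkz.quartzite.triggers :as triggers]))

;; One shared job with zero triggers by default; each database that has
;; persisted models contributes its own cron trigger (every 8 hours).
(defn database-refresh-trigger [db-id]
  (triggers/build
   (triggers/with-identity
     (triggers/key (str "metabase.task.PersistenceRefresh.trigger." db-id)))
   (triggers/for-job (jobs/key "metabase.task.PersistenceRefresh.job"))
   (triggers/with-schedule
     (cron/schedule (cron/cron-schedule "0 0 0/8 * * ? *")))))
```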
      
      * Schedule refreshing on startup, watching for already scheduled
      
      does not allow for schedule changes but that's a future endeavor
      
      * appease our linter overlords
      
* Dynamic var to inhibit persistence when refreshing
      
Also, it checked the state against "active" instead of "persisted", which
is really freaky. How has this worked in the past if that's the case?
      
      * api docstrings on card persist
      
      * docstring
      
      * Don't sync the persisted schemas
      
      * Fix bad sql when no deleteable rows
      
Getting an error with bad SQL when there were no ids.
      
      * TaskHistory for refreshing
      
      * Add created_at to persist_info table
      
      helpful if this ever ends up in the audit section
      
      * works on redshift
      
Hooked up the hierarchy, and Redshift is close enough that it just works.
      
      * Remove persist_info record after deleting "deleteable"
      
      * Better way to check that something exists
      
      * POST /api/<card-id>/refresh
      
      api to refresh a model's persisted record
      
      * return a 204 from refreshing
      
      * Add buttons to persist/unpersist a database and a model for PoC (#21344)
      
      * Redshift and postgres report true for persist-models
      
There are separate notions of whether persistence is possible vs. whether
it is enabled. Seems like we're just gonna check the database details for
enabled and rely on the driver multimethod for whether it is possible.
      
      * feature for enabled, hydrate card with persisted
      
      two features: :persist-models for which dbs support it, and
      :persist-models-enabled for when that option is enabled.
      
      POST to api/<card-id>/unpersist
      
Hydrate `persisted` on cards so the FE can display persist/unpersist for
models.
      
      * adjust migration number
      
      * remove deferred-tru :shrug:
      
      
      
      * conditionally hydrate persisted on models only
      
      * Look in right spot for persist-models-enabled
      
      * Move persist enabled into options not details
      
Changing details recomposes the connection pool, which is especially bad
now that we have refresh tasks going on that reuse the same connection.
      
      * outdated comment
      
      * Clean up source queries from persisted models
      
Their metadata might have had [:field 19 nil] field_refs, and we should
substitute just [:field "the-name" {:base-type :type/Whatever-type}]
since the query will be a select from a native query.
      
      Otherwise you get the following:
      
      ```
      2022-03-31 15:52:11,579 INFO api.dataset :: Source query for this query is Card 4,088
      2022-03-31 15:52:11,595 WARN middleware.fix-bad-references :: Bad :field clause [:field 4070 nil] for field "category.catid" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4068 nil] for field "category.catgroup" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4071 nil] for field "category.catname" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4069 nil] for field "category.catdesc" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4070 nil] for field "category.catid" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4068 nil] for field "category.catgroup" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4071 nil] for field "category.catname" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4069 nil] for field "category.catdesc" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,622 WARN middleware.fix-bad-references :: Bad :field clause [:field 4070 nil] for field "category.catid" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,622 WARN middleware.fix-bad-references :: Bad :field clause [:field 4068 nil] for field "category.catgroup" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,622 WARN middleware.fix-bad-references :: Bad :field clause [:field 4071 nil] for field "category.catname" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,623 WARN middleware.fix-bad-references :: Bad :field clause [:field 4069 nil] for field "category.catdesc" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      ```
I think it's complaining that the table is not joined in the query and
giving up.
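
A rough sketch of the substitution described above (purely illustrative; the real logic lives in the query-processor middleware that swaps in the persisted table):

```clojure
;; Sketch: turn an integer field ref from the model's metadata into a by-name
;; ref, since the persisted table is read like a native
;; `SELECT col1, col2, ... FROM cache_table`.
(defn ->name-based-field-ref
  [{col-name :name, base-type :base_type, :as _column-metadata}]
  [:field col-name {:base-type base-type}])

(comment
  (->name-based-field-ref {:name "catid" :base_type :type/Integer})
  ;; => [:field "catid" {:base-type :type/Integer}]
  )
```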
      
While doing this I see we are hitting the database a lot:
      
      ```
      2022-03-31 22:52:18,838 INFO api.dataset :: Source query for this query is Card 4,111
      2022-03-31 22:52:18,887 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
      2022-03-31 22:52:18,918 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
      2022-03-31 22:52:18,930 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
      ```
      
      I tried to track down why we are doing this so much but couldn't get
      there.
      
      I think I need to ensure that we are using the query store annoyingly :(
      
      * Handle native queries
      
Didn't nest the vector in the `or` clause correctly; that was truthy
only when the mbql-query local was truthy. We can't put the vector
`[false mbql-query]` there and rely on that behavior.
      
      * handle datetimetz in persisting
      
      * Errors saved into persisted_info
      
      * Reorder migrations to put v43.00-047 before 048
      
      * correct arity mismatch in tests
      
      * comment in refresh task
      
      * GET localhost:3000/api/persist
      
      Returns persisting information:
      - most information from the `persist_info` table. Excludes a few
      columns (query_hash, question_slug, created_at)
      - adds database name and card name
      - adds next fire time from quartz scheduling
      
      ```shell
      ❯ http GET "localhost:3000/api/persist" Cookie:$COOKIE -pb
      [
          {
              "active": false,
              "card_name": "hooking reviews to events",
              "columns": [
                  "issue__number",
                  "actor__login",
                  "user__login",
                  "submitted_at",
                  "state"
              ],
              "database_id": 19,
              "database_name": "pg-testing",
              "error": "No method in multimethod 'field-base-type->sql-type' for dispatch value: [:postgres :type/DateTimeWithLocalTZ]",
              "id": 4,
              "next-fire-time": "2022-04-06T08:00:00.000Z",
              "refresh_begin": "2022-04-05T20:16:54.654283Z",
              "refresh_end": "2022-04-05T20:16:54.687377Z",
              "schema_name": "metabase_cache_1e483_19",
              "state": "error",
              "table_name": "model_4077_hooking_re"
          },
          {
              "active": true,
              "card_name": "redshift Categories",
              "columns": [
                  "catid",
                  "catgroup",
                  "catname",
                  "catdesc"
              ],
              "database_id": 229,
              "database_name": "redshift",
              "error": null,
              "id": 3,
              "next-fire-time": "2022-04-06T08:00:00.000Z",
              "refresh_begin": "2022-04-06T00:00:01.242505Z",
              "refresh_end": "2022-04-06T00:00:01.825512Z",
              "schema_name": "metabase_cache_1e483_229",
              "state": "persisted",
              "table_name": "model_4088_redshift_c"
          }
      ]
      
      ```
      
      * include card_id in /api/persist
      
      * drop table if exists
      
      * Handle rescheduling refresh intervals
      
      There is a single global value for the refresh interval. The API
      requires it to be 1<=value<=23. There is no validation if someone
      changes the value in the db or with an env variable. Setting this to a
      nonsensical value could cause enormous load on the db so they shouldn't
      do that.
      
      On startup, unschedule all tasks and then reschedule them to make sure
      that they have the latest value.
      
One thing to note: there is a single global value, but I'm making a task
      for each database. Seems like an obvious future enhancement so I don't
      want to deal with migrations. Figure this gives us the current spec
      behavior to have a trigger for each db with the same value and lets us
      get more interesting using the `:options` on the database in the
      future.
      
      * Mark as admin not internal
      
Lets it show up in `api/setting/`. I'm torn on how special this value
is: is it the setting code's responsibility to invoke the rescheduling of
the refresh triggers, or should that be on the setting itself?

It feels "special" and can do a lot of work from just setting an
integer. There's a special endpoint to set it which is aware of this, and
thus it would be a bit of an error to set this setting through the more
traditional setting endpoint.
      
      * Allow for "once a day" refresh interval
      
      * Global setting to enable/disable
      
      post api/persist/enable
      post api/persist/disable
      
      enable allows for other scheduling operations (enabling on a db, and
      then on a model).
      
      Disable will
      - update each enabled database and disable in options
      - update each persisted_info record and set it inactive and state
      deleteable
      - unschedule triggers to refresh
- schedule a task to unpersist each model (deleting the table and the
associated persisted_info row)
      
      * offset and limits on persisted info list
      
      ```shell
      http get "localhost:3000/api/persist?limit=1&offset=1" Cookie:$COOKIE -pb
      {
          "data": [
              {
                  "active": true,
                  "card_id": 4114,
                  "card_name": "Categories from redshift",
                  "columns": [
                      "catid",
                      "catgroup",
                      "catname",
                      "catdesc"
                  ],
                  "database_id": 229,
                  "database_name": "redshift",
                  "error": null,
                  "id": 12,
                  "next-fire-time": "2022-04-08T00:00:00.000Z",
                  "refresh_begin": "2022-04-07T22:12:49.209997Z",
                  "refresh_end": "2022-04-07T22:12:49.720232Z",
                  "schema_name": "metabase_cache_1e483_229",
                  "state": "persisted",
                  "table_name": "model_4114_categories"
              }
          ],
          "limit": 1,
          "offset": 1,
          "total": 2
      }
      ```
      
      * Include collection id, name, and authority level
      
      * Include creator on persisted-info records
      
      * Add settings to manage model persistence globally (#21546)
      
      * Common machinery for running steps
      
      * Add model cache refreshes monitoring page (#21551)
      
      * don't do shenanigans
      
      * Refresh persisted and error persisted_info rows
      
      * Remarks on migration column
      
      * Lint nits (sorted-ns and docstrings)
      
      * Clean up unused function, docstring
      
      * Use `onChanged` prop to call extra endpoints (#21593)
      
      * Tests for persist-refresh
      
      * Reorder requires
      
      * Use quartz for individual refreshing for safety
      
      switch to using one-off jobs to refresh individual tables. Required
      adding some job context so we know which type to run.
      
      Also, cleaned up the interface between ddl.interface and the
      implementations. The common behaviors of advancing persisted-info state,
      setting active, duration, etc are in a public `persist!` function which
      then calls to the multimethod `persist!*` function for just the
      individual action on the cached table.
      
      Still more work to be done:
      - do we want creating and deleting to be put into this type of system?
      Quite possible
- we still don't have a way to know if a query is running against the
cached table, so we could avoid dropping the table out from under it.
Perhaps using some delay to give time for any running query to finish. I
don't think we can easily solve
      this in general because another instance in the cluster could be
      querying against it and we don't have any quick pub/sub type of
      information sharing. DB writes would be quite heavy.
      - clean up the ddl.i/unpersist method in the same way we did refresh and
      persist. Not quite clear what to do about errors, return values, etc.
      
      * Update tests with more job-info in context
      
      * Fix URL type conflicts
      
      * Whoops get rid of our Thread/sleep test :)
      
      * Some tests for the new job-data, clean up task history saving
      
      * Fix database model persistence button states (#21636)
      
      * Use plain database instance on form
      
      * Fix DB model persistence toggle button state
      
      * Add common `getSetting` selector
      
      * Don't show caching button when turned off globally
      
      * Fix text issue
      
      * Move button from "Danger zone"
      
      * Fix unit test
      
      * Skip default setting update request for model persistence settings (#21669)
      
      * Add a way to skip default setting update request
      
      * Skip default setting update for persistence
      
      * Add changes for front end persistence
      
      - Order by refresh_begin descending
      - Add endpoint /persist/:persisted-info-id for fetching a single entry.
      
      * Move PersistInfo creation into interface function
      
      * Hide model cache monitoring page when caching is turned off (#21729)
      
      * Add persistence setting keys to `SettingName` type
      
      * Conditionally hide "Tools" from admin navigation
      
      * Conditionally hide caching Tools tab
      
      * Add route guard for Tools
      
      * Handle missing settings during init
      
      * Add route for fetching persistence by card-id
      
      * Wrangling persisted-info states
      
      Make quartz jobs handle any changes to database.
      Routes mark persisted-info state and potentially trigger jobs.
      Job read persisted-info state.
      
      Jobs
      
      - Prune
      -- deletes PersistedInfo `deleteable`
      -- deletes cache table
      
      - Refresh
      -- ignores `deletable`
      -- update PersistedInfo `refreshing`
      -- drop/create/populate cache table
      
      Routes
      
      card/x/persist
      - creates the PersistedInfo `creating`
      - trigger individual refresh
      
      card/x/unpersist
      - marks the PersistedInfo `deletable`
      
      database/x/unpersist
      - marks the PersistedInfos `deletable`
      - stops refresh job
      
      database/x/persist
      - starts refresh job
      
      /persist/enable
      - starts prune job
      
      /persist/disable
      - stops prune job
      - stops refresh jobs
      - trigger prune once
      
      * Save the definition on persist info
      
      This removes the columns and query_hash columns in favor of definition.
      
This means that if the persisted understanding of the model is
different from the actual model during fetch-source-query, we won't
substitute.
      
      This makes sure we keep columns and datatypes in line.
      
      * Remove columns from api call
      
      * Add a cache section to model details sidebar (#21771)
      
      * Extract `ModelCacheRefreshJob` type
      
      * Add model cache section to sidebar
      
      * Use `ModelCacheRefreshStatus` type name
      
      * Add endpoint to fetch persistence info by model ID
      
      * Use new endpoint at QB
      
      * Use `CardId` from `metabase-types/api`
      
      * Remove console.log
      
      * Fix `getPersistedModelInfoByModelId` selector
      
      * Use `t` instead of `jt`
      
      * Provide seam for prune testing
      
      - Fix spelling of deletable
      
      * Include query hash on persisted_info
      
We thought we could get away with just checking the definition, but that
is schema-shaped. So if you changed a where clause we should invalidate,
but the definition would be the same (same table name, columns with
types).
      
      * Put random hash in PersistedInfo test defaults
      
      * Fixing linters
      
      * Use new endpoint for model cache refresh modal (#21742)
      
      * Use new endpoint for cache status modal
      
      * Update refresh timestamps on refresh
      
      * Move migration to 44
      
      * Dispatch on initialized driver
      
      * Side effects get bangs!
      
      * batch hydrate :persisted on cards
      
      * bang on `models.persisted-info/make-ready!`
      
      * Clean up a doc string
      
      * Random fixes: docstrings, make private, etc
      
      * Bangs on side effects
      
      * Rename global setting to `persisted-models-enabled`
      
      felt awkward (enabled-persisted-models) and renamed to make it a bit
      more natural. If you are developing you need to set the new value to
      true and then your state will stay the same
      
      * Rename parameter for site-uuid-str for clarity
      
      * Lint cleanups
      
Interesting that the compojure one is needed for clj-kondo, but I guess
it makes sense since there is a raw `GET` in `defendpoint`.
      
      * Docstring help
      
      * Unify type :type/DateTimeWithTZ and :type/DateTimeWithLocalTZ
      
Both are "TIMESTAMP WITH TIME ZONE". I had gotten an error and saw that
the type was timestamptz, so I used that. They are synonyms, although it
might require an extension.
      
      * Make our old ns linter happy
      
Co-authored-by: Alexander Polyankin <alexander.polyankin@metabase.com>
Co-authored-by: Anton Kulyk <kuliks.anton@gmail.com>
Co-authored-by: Case Nelson <case@metabase.com>
  5. Apr 27, 2022
      Fix errors when downgrading then upgrading to bigquery driver (#22121) · 2ada1fc4
      dpsutton authored
This issue has a simple fix but a convoluted story. The new BigQuery
      driver handles multiple schemas and puts that schema (dataset-id) in the
      normal spot on a table in our database. The old driver handled only a
      single schema by having that dataset-id hardcoded in the database
      details and leaving the schema slot nil on the table row.
      
      ```clojure
      ;; new driver describe database:
      [{:name "table-1" :schema "a"}
       {:name "table-2" :schema "b"}]
      
      ;; old driver describe database (with dataset-id "a" on the db):
      [{:name "table-1" :schema nil}]
      ```
      
So if you started on the new driver and then downgraded for some reason,
the table sync would see you had tables with schemas, but when it
enumerated the tables in the database on the next sync, it would see
tables without schemas. It did not unify these two together, nor did it
archive the tables with a schema. You ended up with both copies in the
database, all active.
      
      ```clojure
      [{:name "table-1" :schema "a"}
       {:name "table-2" :schema "b"}
       {:name "table-1" :schema nil}]
      ```
      
If you then tried to migrate back to the newer driver, we migrated them
as normal: since the old driver only dealt with one schema but left it
nil, the migration put that dataset-id on all of the tables connected to
this connection.
      
      But since the new driver and then the old driver created copies of the
      same tables, you would end up with a constraint violation: tables with
      the same name and, now after the migration, the same schema. Ignore this
      error and the sync in more recent versions will correctly inactivate the
      old tables with no schema.
      
      ```clojure
      [{:name "table-1" :schema "a"}  <-|
       {:name "table-2" :schema "b"}    | constraint violation
       {:name "table-1" :schema "a"}] <-|
      
;; preferable:
      [{:name "table-1" :schema "a"}
       {:name "table-2" :schema "b"}
       {:name "table-1" :schema nil :active false}]
      ```
  6. Feb 17, 2022
      Events migration (#20452) · 5a67efea
      adam-james authored
      
      * Migrations for the new 'Annotate Useful Events' project
      
      Users will be able to create 'events' in timelines, which are accessible via collections.
      
      Two migrations are introduced to facilitate the new feature: timeline table, and event table. The following columns
      are presumed necessary based on the design document:
      
      ** TimeLine
          - id
          - name
          - description
          - collection_id
          - archived
          - creator_id
          - created_at
          - updated_by ??
          - updated_at
      
      *** Event
          - id
          - timeline_id
          - name
          - description markdown (max length 255 for some reason)
          - date
          - time is optional
          - timezone (optional)
          - icon (set of 6)
          - archived status
          - created by
          - created at
          - created through: api or UI
          - modified by ??
          - modified at
      
      * Changes to events schema
      
      - add icon onto timeline
      - make icon on event nullable
      - just move to a single timestamp on event, no boolean or anything
      - rename event table to `timeline_events` since "event" is so generic
      
      * dir-locals indentation
      
      * Timeline api and model
      
      - patched up the migration to make collection_id nullable since that is
      the root collection
      - followed the layout of api/metric and did the generic model stuff
      
      * Select keys with keywords, not destructured nils :(
      
Also, we can't return the boolean from the update, so select it again.
      
      * Enable the automatic updated_at
      
      * Boolean for "time matters" and string timezone column on events
      
      * clean up migration, rename modified_at -> updated_at
      
      for benefit of `:timestamped? true` bits
      
      * basic timeline event model namespace
      
      * Timeline Events initial API
      
      Just beginning the basics for the API for timeline events.
      
       - need to find a way to check permissions
       - have to still check if the endpoint is returning stuff properly
      
      * Singularize timeline_event tablename
      
      * Timeline events api hooked up
      
      * Singularize timeline event api namespace
      
      * unused comment
      
      * indent spec maps on routes
      
      * Make name optional on timeline PUT
      
      * Update collection api for timelines endpoints
      
      - add /root/timelines endpoint for when collection id is null
      - add /:id/timelines endpoint
      
      Created a hydration function to hydrate the timeline events when fetching timelines.
      
      * TimelineEvent permissions
      
- create a permissions object set via the timeline's permissions
      
      * Move to using new permissions setup
      
Previously we were explicitly checking permissions of the underlying
timeline from the event, but now we just do the standard read/write
check on the event, and the permissions set knows it is based on the
permission set of the underlying timeline (which is based on the
permissions set of the underlying collection).
      
      * Items api includes timelines
      
* Strip off icon from other rows
      
      * Indices on timeline and timeline_event
      
Timeline access is by collection_id; timeline_event is accessed both by
timeline_id and by (timeline_id, timestamp) when looking for events
within a certain range. We will always have been narrowed by timeline ids
(per the collection containing the question, or by manually enabling
certain timelines), so we don't need a huge index on the timestamp
itself.
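
A hedged sketch of those indexes as plain DDL (index names are illustrative; the real changes are Liquibase changesets, not runtime DDL):

```clojure
(ns example.timeline-indexes
  (:require [next.jdbc :as jdbc]))

;; Sketch: timeline looked up by collection_id; timeline_event by timeline_id
;; and by (timeline_id, timestamp) for range queries.
(defn create-timeline-indexes! [datasource]
  (doseq [ddl ["create index idx_timeline_collection_id on timeline (collection_id)"
               "create index idx_timeline_event_timeline_id on timeline_event (timeline_id)"
               "create index idx_timeline_event_timeline_id_timestamp on timeline_event (timeline_id, \"timestamp\")"]]
    (jdbc/execute! datasource [ddl])))
```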
      
      * Skeleton of range-based query
      
      * Initial timeline API tests
      
      Began with some simple Auth tests and basic GET tests, using a Collection with an id as well as the special root collection.
      
      * Fix docstring on api timeline namespace
      
      * Put timeline's events at `:events` not `timeline-events`
      
      * Add api/card/:id/timelines
      
At the moment api/card/:id/timelines and api/collection/:id/timelines do
the same thing: they return all of the timelines in the collection, or in
the collection belonging to the card. In the future this may drift, but
for now they share an implementation.
      
      * Put creator on timeline
      
      on api endpoints for timeline by id and timelines by card and collection
      
      * Hydrate creator on events
      
Bit of tricky stuff here. The FE wants the events at "events", not
"timeline-events", I guess due to the hyphen that JS can never quite be
sure isn't subtraction.
      
      To use the magic of nested hydration
      
      `(hydrate timelines [:events :creator])`,
      
the events have to be put at the declared hydration key. I had wanted
the hydration key to be `:timeline-events` for clarity in the code but
wanted `:events` to be the key in the data. But when you do this, the
hydration thinks it didn't do any work and cannot add the creator. So the
layout of the data structure is tied to the name of the hydration key.
Makes sense, but I wish I had more control since `:events` is so generic.
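
For context, a hedged sketch of how a hydration key like that can be declared with Toucan's batched hydration (the body and names are illustrative; the real code is in metabase.models.timeline):

```clojure
(ns example.timeline
  (:require [toucan.db :as db]))

;; Sketch: because the hydration is declared as :events, the data must also be
;; assoc'd under :events — otherwise nested hydration like
;; (hydrate timelines [:events :creator]) can't find the rows to hydrate further.
(defn ^{:batched-hydrate :events} events
  [timelines]
  (let [events (when (seq timelines)
                 (db/select 'TimelineEvent
                   :timeline_id [:in (map :id timelines)]))
        timeline-id->events (group-by :timeline_id events)]
    (for [timeline timelines]
      (assoc timeline :events (get timeline-id->events (:id timeline) [])))))
```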
      
      * Add include param check to allow timeline GET to hydrate w/ events.
      
      The basic check is in place for include=events in the following ways:
      
      GET /api/timeline/:id?include=events
      GET /api/collection/root/timelines?include=events
      
If the include param is omitted, timelines are hydrated only with the creator. Events are always hydrated with their creator.
      
      * fix hyphen typo in api doc
      
      * Change schema for include=events to s/enum to enforce proper param
      
      Had used just a s/Str validation, which allows the include parameter value to be anything, which isn't great. So,
      switched it to an enum. Can always change this later to be a set of valid enums, similar to the `model` parameter on
      collection items.
      
      * Fixed card/:id/timelines failing w/ wrong arity
      
      The `include` parameter is passed to `timeline/timelines-for-collection` to control hydration of the events for that
      timeline. Since the card API currently calls this function too, it requires that parameter be passed along.
      
      As well, I abstracted the `include-events-schema` and have it in metabase.api.timeline. Not married to it, but since
      the schema was being used across timeline, collection, and card API, it seems like a good idea to define it in one
      place only. Not sure if it should be metabase.api.timeline or metabase.models.timeline
      
      * used proper migration numbers (claimed in migrations slack channel)
      
      * Add archived=true functionality
      
      There's one subtle issue remaining, and that is that when archived=true, :creator is not hydrated on the events.
      
      I'll look into fixing that.
      
      In the meantime, the following changes are made:
      
- :events returns an empty list `[]` instead of `null` when there are no events in a timeline
       - archived events can be fetched via `/api/timeline/:id?include=events&archived=true`
       - similarly, archived events can be fetched with:
         - `/api/collection/root|:id/timelines?include=events&archived=true`
         - `/api/card/:id/timelines?include=events&archived=true`
      
Just note the caveat for the time being (no creator hydrated on events when fetching archived). Fix pending.
      
      * Altered the hydration so creator is always part of a hydrated event
      
      Adjusted the hydration of timelines as follows:
      
      - hydration always grabs all events
      - at the endpoint's implementation function, the timeline(s) are altered by filtering on archived true or false
      
      Not sure if this is the best approach, but for now it should work
      
      * Create GET endpoint for /api/timeline and allow archived=true
      
      Added a missed GET endpoint for fetching all the timelines (optionally fetch archived)
      
      * reformat def with docstring
      
      * Timeline api updated to properly filter archived/un-archived
      
      Use archived=true pattern in the query params for the timeline endpoint to allow FE to get what they need.
      
      Work in progress on the API tests too.
      
      * Timeline-events API tests
      
      Timeline Events API endpoint tests, at least the very basics.
      
      * Timeline Hydration w/ Events tests
      
      Added a test for Events hydration in the timeline API test namespace.
      
      * TimelineEvent Model test NS
      
      May be a bit unnecessary to test the hydrate function, but the namespace is there in case we need to add more tests
      over time.
      
      Also adjusted out a comment and added a library to timeline_test ns
      
      * Added metabase.models.timeline-test NS
      
      Once again, this may be a bit unnecessary as a test ns for now, but could be the right place to add tests if the
      feature grows in the future.
      
      * Clean up handling of archived
      
- we only want to show archived events when viewing a timeline by id and
specifically asking for archived events. Presumably this is some
administrative screen. We don't want to allow all these options when
viewing a viz, as it's just overload and they are archived for a
reason. If the FE really wants them it can go by id of the timeline. Note
it will return all events, not just the archived ones.
      - use namespaced keywords in the options map. Getting timelines
      necessarily requires getting their events sometimes. And to avoid
      confusion about the archived state, we have options like
      `:timeline/events?`, `:timeline/archived?`, `:events/start`,
      `:events/end`, and `:events/all?` so they are all in one map and not
      nested or ambiguous.
      
      * Clean up some tests
      
Noticeable difference: timelines with archived=true return all events, as
if it means "include archived" rather than "only show archived".
      
      * PUT /api/timeline/:id now archives all events when TL is archived
      
      We can archive/un-archive timelines, and all timeline events associated with that timeline will follow suit.
      
      This does lose state when, for example, there is a mix of archived/un-archived events, as there is no notion of
      'archived-by' to check against. But, per discussion in the pod-discovery slack channel, this is ok for now. It is also
      how we handle other archiving scenarios anyway, with items on collections being archived when a collection is archived.
      
      * cleanup tests
      
      * Include Timeline and TimelineEvent in models
      
      * Add tt/WithTempDefaults impls for Timeline/TimelineEvent
      
      lets us remove lots of the `(merge defaults {...})` that was in the code.
      
      Defaults are
      
      ```clojure
         Timeline
         (fn [_]
           {:name       "Timeline of bird squawks"
            :creator_id (rasta-id)})
      
         TimelineEvent
         (fn [_]
           {:name         "default timeline event"
            :timestamp    (t/zoned-date-time)
            :timezone     "US/Pacific"
            :time_matters true
            :creator_id   (rasta-id)})
      ```
      
      * Add timeline and timeline event to copy infra
      
      * Timeline Test checking that archiving a Timeline archives its events
      
      A Timeline can be archived and its events should also be archived.
      A Timeline can also be unarchived, which should also unarchive the events.
      
      * Docstring clarity on apis
      
      * Remove commented out form
      
      * Reorder migrations
      
      * Clean ns
      
      clj-kondo ignores those but our linter does not
      
      * Correct casing on api schema for include
      
      * Ensure cleanup of timeline from test
      
      * DELETE for timeline and timeline-event
      
      * Poison collection items timeline query when is_pinned
      
Timelines have no notion of pinning and were coming back in both queries,
pinned and not pinned. This adds a poison clause (1=2) when searching
for pinned timelines, since no timelines are pinned but they don't have a
column saying so.
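
A tiny sketch of the poison clause in honeysql-style map form (names are illustrative; the real query is the collection-items query):

```clojure
;; Sketch: the pinned branch of the items query gets an always-false clause,
;; because timelines have no pinned column and can never be pinned.
(defn timelines-items-query [pinned?]
  (cond-> {:select [:id :name :description :collection_id]
           :from   [:timeline]}
    pinned? (assoc :where [:= 1 2])))   ; 1 = 2 matches nothing
```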
      
      * Clean up old comment and useless tests
      
The comment still said to poison the query when getting pinned state, but
that function had already been added.

Tests were asserting count = 2 and also that the set of names had two
things in it. Close enough, since there are no duplicate names.
      
      * Use TemporalString schema on timeline event routes
      
Co-authored-by: dan sutton <dan@dpsutton.com>
  7. Dec 28, 2021
      Handle nested queries which have agg at top level and nested (#19437) · 50818a92
      dpsutton authored
      * Handle nested queries which have agg at top level and nested
      
Previously, when matching columns in the outer query with columns in the
inner query, we had used id and then, recently, field_ref. This is
problematic for aggregations.
      
      Consider https://github.com/metabase/metabase/issues/19403
      
      The mbql for this query is
      
      ```clojure
      {:type :query
       :query {:aggregation [[:aggregation-options
                              [:count]
                              {:name "count"}]
                             [:aggregation-options
                              [:avg
                               [:field
                                "sum"
                                {:base-type :type/Float}]]
                              {:name "avg"}]]
               :limit 10
               :source-card-id 1960
               :source-query {:source-table 1
                              :aggregation [[:aggregation-options
                                             [:sum
                                              [:field 23 nil]]
                                             {:name "sum"}]
                                            [:aggregation-options
                                             [:max
                                              [:field 28 nil]]
                                             {:name "max"}]]
                              :breakout [[:field 26 nil]]
                              :order-by [[:asc
                                          [:field 26 nil]]]}}
       :database 1}
      ```
      
      The aggregations in the top level select will be type checked as :name
      "count" :field_ref [:aggregation 0]. The aggregations in the nested
      query will be turned into :name "sum" :field_ref [:aggregation 0]! This
      is because aggregations are numbered "on their level" and not
      globally. So when the fields on the top level look at the metadata for
      the nested query and merge it, it unifies the two [:aggregation 0]
      fields but this is INCORRECT. These aggregations are not the same, they
      just happen to be the first aggregations at each level.
      
It's illustrative to see what a (select * from (query with aggregations))
      looks like:
      
      ```clojure
      {:database 1
       :query {:source-card-id 1960
               :source-metadata [{:description "The type of product, valid values include: Doohicky, Gadget, Gizmo and Widget"
                                  :semantic_type :type/Category
                                  :coercion_strategy nil
                                  :name "CATEGORY"
                                  :field_ref [:field 26 nil]
                                  :effective_type :type/Text
                                  :id 26
                                  :display_name "Category"
                                  :fingerprint {:global {:distinct-count 4
                                                         :nil% 0}
                                                :type {:type/Text {:percent-json 0
                                                                   :percent-url 0
                                                                   :percent-email 0
                                                                   :percent-state 0
                                                                   :average-length 6.375}}}
                                  :base_type :type/Text}
                                 {:name "sum"
                                  :display_name "Sum of Price"
                                  :base_type :type/Float
                                  :effective_type :type/Float
                                  :semantic_type nil
                                  :field_ref [:aggregation 0]}
                                 {:name "max"
                                  :display_name "Max of Rating"
                                  :base_type :type/Float
                                  :effective_type :type/Float
                                  :semantic_type :type/Score
                                  :field_ref [:aggregation 1]}]
               :fields ([:field 26 nil]
                        [:field
                         "sum"
                         {:base-type :type/Float}]
                        [:field
                         "max"
                         {:base-type :type/Float}])
               :source-query {:source-table 1
                              :aggregation [[:aggregation-options
                                             [:sum
                                              [:field 23 nil]]
                                             {:name "sum"}]
                                            [:aggregation-options
                                             [:max
                                              [:field 28 nil]]
                                             {:name "max"}]]
                              :breakout [[:field 26 nil]]
                              :order-by [[:asc
                                          [:field 26 nil]]]}}
       :type :query
       :middleware {:js-int-to-string? true
                    :add-default-userland-constraints? true}
       :info {:executed-by 1
              :context :ad-hoc
              :card-id 1960
              :nested? true
              :query-hash #object["[B" 0x10227bf4 "[B@10227bf4"]}
       :constraints {:max-results 10000
                     :max-results-bare-rows 2000}}
      ```
      
      The important bits are that it understands the nested query's metadata
      to be
      
      ```clojure
      {:name "sum"
       :display_name "Sum of Price"
       :field_ref [:aggregation 0]}
      {:name "max"
       :display_name "Max of Rating"
       :field_ref [:aggregation 1]}
      ```
      
      And the fields on the outer query to be:
      ```clojure
      ([:field
        "sum"
        {:base-type :type/Float}]
       [:field
        "max"
        {:base-type :type/Float}])
      ```
      
      So there's the mismatch: the field_ref on the outside is [:field "sum"]
      but on the inside the field_ref is [:aggregation 0]. So the best way to
      match up when "looking down" into sub queries is by id and then by name.
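
A toy sketch of that matching order (illustrative only, not the annotate middleware itself):

```clojure
;; Sketch: when "looking down" into a source query's metadata, match an outer
;; [:field id-or-name opts] ref against inner columns by :id first, then by
;; :name. Level-relative refs like [:aggregation 0] are not used for matching,
;; since the same index can mean different aggregations at different levels.
(defn find-source-column [[_ id-or-name] source-metadata]
  (or (some #(when (= (:id %) id-or-name) %) source-metadata)
      (some #(when (= (:name %) id-or-name) %) source-metadata)))

(comment
  (find-source-column [:field "sum" {:base-type :type/Float}]
                      [{:name "sum" :field_ref [:aggregation 0]}
                       {:name "max" :field_ref [:aggregation 1]}])
  ;; => {:name "sum", :field_ref [:aggregation 0]}
  )
```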
      
      * Some drivers return 4.0 instead of 4 so make them all ints
      
      * Handle dataset metadata in a special way
      
Rather than trying to set up confusing merge rules, just special-case
metadata from datasets.
      
Previously, we were trying to merge the "preserved keys" on top of the
normal merge order. This caused lots of issues. They were trivial:
native display names are very close to the column name, whereas mbql
names go through some humanization. So when you select price
from (select PRICE ...), it's mbql with a nested native query. The
merge order meant that the display name went from "Price"
previously (the mbql nice name for the outer select) to "PRICE", the
underlying native column name. Now we don't special-case the display
name (and other fields) of regular source-metadata.
      
      Also, there were issues with mbql on top of an mbql dataset. Since it is
      all mbql, everything is pulled from the database. So if there were
      overrides in the nested mbql dataset, like description, display name,
      etc, the outer field select already had the display name, etc. from the
      database rather than allowing the edits to override from the nested
      query.
      
Also, using a long `:source-query/dataset?` keyword so it is far easier
to find where this is set. With things called just `:dataset` it can be
quite hard to find where these keys are used. When using the namespaced
keyword, grepping and finding usages is trivial. And the namespace gives
more context.
  8. Dec 15, 2021
  9. Sep 14, 2021
  10. Sep 09, 2021
  11. Aug 11, 2021
      Changing type (#16776) · c60dfc5f
      dpsutton authored
      * Move sync executor to bespoke namespace
      
      * Refingerprint fields on type change
      
      * Check if can connect in refingerprint-field!
      
      * docstring update
      
      * Cleanup tests a bit
      
      * Error handling
      
      * Table field values endpoint on sync.concurrent executor
      
      * ns sort api field
      
      * ns cleanup
  12. Aug 04, 2021
      Minor test runner/config fixes (#17323) · ad69111e
      Cam Saul authored
      * Fix JUnit output not correctly stripping ANSI color codes
      
      * Remove find-tests profiling log message
      
      * 1-arg arity of metabase.test-runner/run
      
      * Fix bug with metabase.test-runner/find-tests with a single test name
  13. Jul 30, 2021
  14. Jun 22, 2021
      Move *ignore-cached-results* to middleware (#16695) · 83f948ff
      dpsutton authored
      * Move *ignore-cached-results* to middleware
      
Had tried to adjust the binding to make this work, but the binding ended
before all of the bound functions that had been submitted as work
actually ran, so when that code executed the binding was no longer in
place. Annoying, and it should be in middleware anyway.
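
The underlying problem, as a tiny generic sketch (not the actual QP code): a dynamic binding is looked up when the work runs, not when it is submitted, so by the time deferred work executes the binding has already ended.

```clojure
(def ^:dynamic *ignore-cached-results* false)

(def plain-work (atom nil))
(def bound-work (atom nil))

(binding [*ignore-cached-results* true]
  (reset! plain-work (fn [] *ignore-cached-results*))        ; no conveyance
  (reset! bound-work (bound-fn [] *ignore-cached-results*))) ; captures bindings

;; Invoked later, e.g. from an executor thread, after the binding has ended:
(@plain-work) ;; => false — the bug: the flag silently reverts
(@bound-work) ;; => true  — bound-fn conveys the binding
```

Moving the flag into middleware (as a key on the query) sidesteps the whole issue.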
      
      * Remove now unused import (cache became keyword in middleware not binding)
  15. Jun 02, 2021
      Authority Level on collections (formerly Typed collections) (#16191) · ad753b18
      dpsutton authored
      * Add type on collection
      
      * Search with collection type
      
      * Cleanup bounded heap accumulator
      
      * Make search conditionally bump official collection items
      
      * collection api and tests
      
      * Put collection type onto hydrated cards on dashboards
      
      * Move official collection type scoring into ee codebase
      
      * ensure ee and oss agree when not official collection
      
      * Mark Collections tree with type
      
      * Remove unnecessary `and`s when no check for enhancements
      
      * Tests for setting collection tree type
      
      * Include hydration for collection type on dashboards
      
      * Make sure created test collections are cleaned up
      
      * Cleanup tests, don't search on collection_type
      
Search looks at all text fields and matches the term against them;
"official" would bring back all official-type collections. No bueno.
      
      * Docstring on protocol and don't shadow comparator
      
      * update to new ee impl var passing style
      
      * Collection model ensures no type change on personal collection
      
      * Check for personal collection when updating type
      
The model checks for a personal collection and rejects the update, but
doesn't check for children of a personal collection.
      
      * Update checking for unchangeable properties of a personal collection
      
      * Cleanup: type collection tree, batched hydration, combine error checks
      
      * Cleanup
      
      * move bounded-heap accumulator to utils
      
      * switch to test.check testing
      
      * Bad test just to see what our CI will report
      
      * remove purposeful failing test (was for CI)
      
      * collection.type -> collection.authority_level
      
      * Test the actual ee scoring impl, not the wrapped one
      
Locally I had enhancements enabled so the tests were passing, but in
CI they were failing as it did not have enhancements enabled.
      Unverified
      ad753b18
  16. May 17, 2021
    • Cam Saul's avatar
      Add Semantic/* and Relation/* ancestor types (#15994) · 9700fe5b
      Cam Saul authored
      * Port legacy data type migrations -> Liquibase
      
      * Fix migration IDs
      
      * Field type validation/error handling
      
      * Have semantic type fallback to nil
      
      * Fix semantic-type-migrations-test
      
      * Fix migrations
      
      * Revert accidental changes
      
      * Semantic/* & Relation/* ancestor types
      
      * Fix stray Relation/PK and Relation/FKs
      
      * Semantic/* and Relation/* ancestor types
      
      * cljs test fix
      
      * Fix :require
      
      * FE test fixes :wrench:
      
      * Test fixes :wrench:
      
      * prettier
      
      * PR  f e e d b a c k
      
      * Use medium size CircleCI image for Presto to prevent all the OOMs
      
      * Backport dir-locals tweaks from hierarchy PR
      
      * Redshift: only sync the test schema (faster CI and fix failure)
      
      * Better error handling for sync in tests
      
      * Revert accidental commit
      
      * Redshift test fixes :wrench:
      Unverified
      9700fe5b
  17. Apr 13, 2021
    • dpsutton's avatar
      Remove unixtimestamp type (#15533) · 4910a3e2
      dpsutton authored
      * Remove type/UNIX* and type/ISO8601* from frontend
      
      * Set effective-type and coercion strategy when syncing
      
      These values will most likely only be available in tests, as real metadata is
      very unlikely to include them. As far as I can tell the toucannery apparatus
      is the only bit that has them. It's quite artificial, and I'm not sure this is
      a good idea.
      
      * :type/UNIXTimestampSeconds and :type/ISO8601DateTimeString out of the type hierarchy
      
      Remove the coercions from the type hierarchy. This brought up a strange issue
      that has been present for a while: all Liquibase migrations run and then all
      data migrations run. They are not interleaved. This allows for the following
      scenario:
      
      - a data migration migrates all X values to Y in version 26
      - a Liquibase migration migrates all Y values to Z in version 28
      
      These are not run in version order; all Liquibase migrations run first and
      then all data migrations run. If you were on an old version for a while, you
      could end up with Y values even though the Liquibase migration that comes
      "after" the data migration turns all Y values into Z values.
      
      This impacts this change in the following way:
      - Liquibase changesets 287, 288, 289, and 290 remove all 'type/UNIX*'
      and 'type/ISO8601*' semantic types. These were in 0.39.
      - the data migration `migrate-field-types`, added in 0.20.0, was looking for
      special_type values of "timestamp_milliseconds" and migrating them to
      "type/UNIXTimestampMilliseconds".
      
      Since the Liquibase changesets run before migrate-field-types, it's possible
      for a 'type/UNIX*' semantic type to reappear. And since these were removed
      from the type system, that would fail validation and blow up. In this case it
      actually can't happen, since the column the data migration reads
      (special_type) no longer exists, but because the migrations are not
      interleaved the underlying problem remains.
      
      * whatever prettier
      
      * Appease the machines
      
      * Restore the unix and iso semantic types to hierarchy
      
      Migration tests still use these:
      `(impl/test-migrations [283 296] [migrate!] ...)`
      
      A few things are in tension here: the field requires valid semantic_types, and
      these _were_ valid semantic_types until v39.
      
      * Remove old "coercion" semantic types
      
      * Migrate the v20 version semantic types for unix epoch
      
      * Time travelling migrations
      
      These target v20 and v29, which at the time had a special_type column rather
      than semantic_type. But they run after all of the schema migrations, not
      interleaved with them, so the column will be semantic_type rather than
      special_type when they actually run, even though they target v20 or v29 data.
      Strange world.
      
      * add migration's new checksum
      Unverified
      4910a3e2
  18. Apr 12, 2021
    • Dalton's avatar
      Dashboard parameter field filter operators feature flag (#15519) · a362e2f3
      Dalton authored
      
      * Backend feature flag for new field filters
      
      * Feature flag new parameter options
      
      When the "field-filter-operators-enabled?" flag is disabled we do the following:
      1. Replace new operator options with old category and location/city, etc., options
         in the PARAMETER_OPTIONS list found in metabase/meta/Parameter.js
      2. Hide numbers section in the PARAMETER_SECTIONS list found in
         metabase/meta/Dashboard.js
      3. Return args as-is in the mapUIParameterToQueryParameter function
         found in metabase/meta/Parameter.js
      
      React/UI code handles both old options and new options so doesn't need
      to change. Old parameter types like "category" and "location/city" are
      treated like "string/=" in the UI but retain their own parameter type
      when used to send a new query.
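      
      On the backend, a flag like this is typically declared with Metabase's
      `defsetting` macro. A minimal sketch of the flag named above, assuming the
      macro's usual boolean-setting shape (illustrative only; not necessarily this
      PR's exact code or namespace):
      
      ```clojure
      ;; Hypothetical sketch of the feature-flag declaration.
      (require '[metabase.models.setting :refer [defsetting]])
      
      (defsetting field-filter-operators-enabled?
        "Whether the new field filter operator parameter types are enabled."
        :type    :boolean
        :default false)
      ```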
      
      * Fix FE issues caused by meta/Parameter refactor
      
      * mock the field operator param flag to make tests pass
      
      * add/fix cypress tests
      
      * fix import in ParametersPopover
      
      * update widget tag type
      
      * Enable field filter operators for cypress tests
      
      * Question marks are questionable
      
      * Conditionally use category or string/= if field filters are enabled
      
      * rmv mocks where we don't need them
      
      * rmv mock from chained-filters test
      
      * env vars as string in project.clj, alignment
      
      Co-authored-by: dan sutton <dan@dpsutton.com>
      Unverified
      a362e2f3
  19. Mar 31, 2021
  20. Mar 23, 2021
  21. Feb 25, 2021
    • Cam Saul's avatar
      Optimize relative datetime filters (#14835) · 538e5e38
      Cam Saul authored
      1. Rename the optimize-datetime-filters middleware -> optimize-temporal-filters (it's more accurate, because this also
      optimizes date or time filter clauses)
      
      2. The optimize-temporal-filters middleware now optimizes relative-datetime clauses (which represent a moment in time
      relative to when the query is run, e.g. "last month") in addition to absolute-datetime clauses (which represent an
      absolute moment in time, e.g. 2021-02-15T14:40:00-08:00). This middleware rewrites queries so we filter against
      specific temporal ranges without casting the column itself, meaning we can leverage indexes on that column. See #11837
      for more details. (A small sketch of the idea follows this list.)
      
      3. Added a new validate-temporal-bucketing middleware that throws an Exception if you try to do something that makes no
      sense, e.g. bucket a DATE field by :time or a TIME field by :month. This is a better situation than running the query
      and waiting for the DB to complain. (In practice, I don't think the FE client would let you generate a query like this
      in the first place.)
      
      4. Fix random test failures for MySQL in task-history-cleanup-test
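      
      A minimal sketch of the rewrite idea from point 2, using a hypothetical helper
      and simplified MBQL-ish clause shapes (not the real optimize-temporal-filters
      middleware): a filter on a month-bucketed column becomes a half-open range on
      the raw column, so the database can use an index instead of truncating the
      column per row.
      
      ```clojure
      (import 'java.time.LocalDate)
      
      (defn month-filter->range
        "Rewrite a filter on a month-bucketed field into a half-open range on the
        unbucketed field. `field-id` and the clause shapes are illustrative."
        [field-id ^LocalDate some-day-in-month]
        (let [month-start (.withDayOfMonth some-day-in-month 1)
              next-month  (.plusMonths month-start 1)]
          [:and
           [:>= [:field field-id nil] month-start]
           [:<  [:field field-id nil] next-month]]))
      
      ;; e.g. a "February 2021" filter on field 42 becomes the equivalent of
      ;; created_at >= 2021-02-01 AND created_at < 2021-03-01, with no cast or
      ;; date_trunc applied to the column itself:
      (month-filter->range 42 (LocalDate/of 2021 2 15))
      ```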
      Unverified
      538e5e38
  22. Jan 15, 2021
    • dpsutton's avatar
      Sync startup cleanups (#14428) · 4028dc65
      dpsutton authored
      * Log db info from db query
      
      We're already making the query, so we might as well also get the information
      we want to log.
      
      * Alignment settings in .dir-locals
      
      * Inverted logic for logging fingerprint vs refingerprint
      Unverified
      4028dc65
  23. Jan 14, 2021
  24. Jan 07, 2021
  25. Jan 05, 2021
  26. Oct 05, 2020
    • Paul Rosenzweig's avatar
      Dashboard interactivity: custom drill through and chained filters (#13354) · c4353a1f
      Paul Rosenzweig authored
      
      * dashboard interactivity: custom drill through and chained filters
      
      (cherry picked from commit f3ea3099da981c967fcffd5c7e318c88064c0a96)
      
      * add missing param
      
      * Better error message & test for cases where you don't have Field perms
      
      (cherry picked from commit 2ae8174c388a105170bb316b6266b900f8844c62)
      
      * Fix lint error
      
      (cherry picked from commit aa967359ecf9705f28dded2b5b30eb42ebec156e)
      
      * type another letter to avoid ordering issue
      
      (cherry picked from commit 5f285219c07eeff41b3d1fbfc091092a6ce60ed4)
      
      * fix failing cy tests
      
      (cherry picked from commit 8dbb9561a21f853a550d4ab1385fbdf142ea36e6)
      
      * update test
      
      * Fix merge
      
      * sort correctly
      
      * Fix namespace decl sorting
      
      * use another within block for the "add to dash" modal
      
      Co-authored-by: Cam Saul <github@camsaul.com>
      Unverified
      c4353a1f
  27. Jul 20, 2020
    • Robert Roland's avatar
      Update eslint (#12951) · d86b85ce
      Robert Roland authored
      
      * Update eslint
      
      Updates eslint, babel and plugins to the latest compatible versions
      
      Drops the 'no-color-literals' parameter, which doesn't exist (it looks like
      it's actually part of eslint-plugin-react-native, which we don't use).
      
      Adds a dir-local to make sure js2-mode doesn't confuse you with type
      errors that aren't actually errors because of flowtype and such.
      
      * update generated css classes in snapshots
      
      Co-authored-by: Paul Rosenzweig <paul.a.rosenzweig@gmail.com>
      Unverified
      d86b85ce
  28. Feb 19, 2020
  29. Dec 03, 2019
  30. Oct 07, 2019
  31. Jul 29, 2019
  32. Jun 05, 2019
  33. Mar 13, 2019
  34. Dec 19, 2017
  35. Feb 01, 2017
  36. Jan 27, 2017
  37. Dec 06, 2016
  38. Nov 02, 2016
  39. Oct 19, 2016