This project is mirrored from https://github.com/metabase/metabase.
  1. Jun 08, 2022
  2. Jun 07, 2022
    • Include :features whether or not db is initialized (#23128) · f0a1406e
      Case Nelson authored
      * Include :features whether or not db is initialized
      
      On restart, drivers may not be initialized but admins still need to know
      what features are available. Always include features on db post-select.
      
      * Driver can be null when not selected
      
      * Remove initialized check for :details as well
      
      Both driver.u/features and driver/normalize-db-details eventually call
      `dispatch-on-initialized-driver` which will call
      `initialize-driver-if-needed`, removing the need for the check.
      
      * Unregistered drivers can make it in here, but they can be abstract, so driver/available is not appropriate
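      
      A minimal sketch of the idea (simplified names, not the actual Metabase code): `features-for` stands in for driver.u/features, and post-select always attaches :features, tolerating a database whose driver has not been selected or initialized yet.
      
      ```clojure
      (ns sketch.database-features
        "Sketch only: always attach :features in post-select, whether or not the
        driver has been initialized; the driver multimethod dispatch initializes
        it lazily on first use.")
      
      ;; Hypothetical stand-in for driver.u/features.
      (defn- features-for [engine _database]
        (case engine
          :postgres #{:nested-queries :expressions :left-join}
          #{}))
      
      (defn post-select-database
        "Tolerate a nil :engine (driver not selected yet)."
        [{:keys [engine] :as database}]
        (cond-> database
          engine (assoc :features (features-for engine database))))
      
      ;; (post-select-database {:id 1 :engine :postgres})
      ;; ;=> {:id 1, :engine :postgres, :features #{:nested-queries :expressions :left-join}}
      ```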
    • Do trivial identity for passthrough of symbols for nested field column instead... · ea4825a3
      Howon Lee authored
      Do trivial identity for passthrough of symbols for nested field column instead of default processing (#23136)
      
      Should whack both #23026 and #23027.
      
      The previous (default) behavior of the reducible query that fetches the limited contents of the JSON to break out the nested field columns was to lowercase identifiers. This is the root cause of #23026 and #23027.
      
      But we need the proper case of those identifiers in order to modify the query correctly when we query the JSON, so we set the reducible query to simply pass the identifiers through instead of applying the default behavior.
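      
      A tiny illustration of the change (hypothetical helper names, not Metabase's actual reducible-query options): identifier handling goes from lowercasing to a plain identity passthrough.
      
      ```clojure
      (ns sketch.nested-field-identifiers
        (:require [clojure.string :as str]))
      
      ;; Old default: identifiers read back while sampling the JSON were
      ;; lowercased, losing the case needed to address the JSON keys later.
      (defn default-read-identifier [ident]
        (str/lower-case ident))
      
      ;; Sketched fix: pass identifiers through untouched.
      (defn passthrough-read-identifier [ident]
        ident)
      
      ;; (default-read-identifier     "customerAddress") ;=> "customeraddress"
      ;; (passthrough-read-identifier "customerAddress") ;=> "customerAddress"
      ```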
  3. Jun 06, 2022
  4. Jun 03, 2022
  5. Jun 02, 2022
  6. Jun 01, 2022
  7. May 31, 2022
    • Add entity_id columns to serialized tables with external IDs (#22762) · 911892b8
      Braden Shepherdson authored
      That is: collection, dimension, metric, native_query_snippet, pulse,
      report_card, report_dashboard, report_dashcard, segment, timeline
      
      Notably, that doesn't include database, table, or field, since those all
      have external unique IDs that are used instead.
    • Include field annotations for native queries too (#22962) · 87d4e587
      Case Nelson authored
      * Include field annotations for native queries too
      
      Persistence will replace a source-table source-query with a native
      query, but preprocess has still filled in source-metadata with all of
      the relevant field-ids expected to be returned. With this change we
      include field info from the store in the same way that mbql-cols does.
      This allows persisted models to honor field settings like `:visibility
      :details-only`.
      
      * Force type of merge-source-metadata-col to map
      
      Because the lookup to store/field happens at the top of the merge, the
      annotations coming through were of type FieldInstance. Tests, at least, were
      unhappy about this, and it's better not to change the type.
      
      * Resolve fields for ids in source-metadata
      
      Makes sure that the qp/store has all the available fields for
      annotations.
      
      * Recursively find source-metadata field-ids for annotations
      
      * Use transducer as per review
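      
      A sketch of the merge described above, with a hypothetical in-memory `fields-by-id` standing in for the qp.store lookup; it is illustrative only, not the real annotate middleware.
      
      ```clojure
      (ns sketch.merge-source-metadata
        "Sketch: merge stored field info under a source-metadata column so persisted
        (native) models still honor field settings such as visibility.")
      
      ;; Hypothetical stand-in for the qp.store field lookup.
      (def ^:private fields-by-id
        {42 {:id 42 :name "total" :visibility_type :details-only}})
      
      (defn merge-source-metadata-col
        "Merge field info (if any) under the source-metadata column, forcing the
        result to a plain map so no record type leaks through."
        [{:keys [id] :as source-col} qp-col]
        (merge (into {} (get fields-by-id id))
               (into {} source-col)
               (into {} qp-col)))
      
      ;; (merge-source-metadata-col {:id 42 :display_name "Total"} {:base_type :type/Float})
      ;; ;=> {:id 42, :name "total", :visibility_type :details-only,
      ;; ;    :display_name "Total", :base_type :type/Float}
      ```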
    • Fix deadlock in pivot table connection management (#22981) · a15fc4ea
      dpsutton authored
      Addresses part of https://github.com/metabase/metabase/issues/8679
      
      Pivot tables can have subqueries that run to create tallies. We do not
      hold entire result sets in memory, so we have a bit of an
      inversion of control flow: connections are opened, queries run, result
      sets are transduced, and then the connection is closed.
      
      The error here was that subsequent queries for the pivot were run while
      the first connection was still held open, even though that connection was
      no longer needed. Enough pivots running at the same time in a dashboard can
      then create a deadlock where the subqueries need a new connection, but the
      main queries cannot be released until the subqueries have completed.
      
      Also, rf management is critical. Its completion arity must be called
      once and only once. We also have middleware that needs to be
      composed (format, etc.) and middleware that can only be composed
      once (limit). We have to save the original reducing function before
      composition (this is the one that can write to the download writer, etc.)
      but compose it each time we use it with `(rff metadata)` so we get the
      format and other middleware. Keeping this distinction in mind will save
      you lots of time. (The limit query will ignore all subsequent rows if
      you just grab the output of `(rff metadata)` and not the rf returned
      from the `:rff` key on the context.)
      
      But this takes the following connection management:
      
      ```
      tap> "OPENING CONNECTION 0"
      tap> "already open: "
        tap> "OPENING CONNECTION 1"
        tap> "already open: 0"
        tap> "CLOSING CONNECTION 1"
        tap> "OPENING CONNECTION 2"
        tap> "already open: 0"
        tap> "CLOSING CONNECTION 2"
        tap> "OPENING CONNECTION 3"
        tap> "already open: 0"
        tap> "CLOSING CONNECTION 3"
      tap> "CLOSING CONNECTION 0"
      ```
      
      and properly sequences it so that connection 0 is closed before opening
      connection 1.
      
      It hijacks the executef to just pass that function into the reducef part
      so we can reduce multiple times and therefore control the
      connections. Otherwise the reducef happens "inside" of the executef at
      which point the connection is closed.
      
      Care is taken to ensure that:
      - the init is only called once (subsequent queries have the init of the
      rf overridden to just return `init`, the acc passed in, rather than
      calling `(rf)`)
      - the completion arity is only called once (via `(completing rf)`; the
      reducing function in the subsequent queries is just `([acc] acc)`
      and does not call `(rf acc)`). Remember this applies only to the lower
      reducing function: all of the takes, formats, etc. _above_ it will
      have their completion arity called because we are using transduce. The
      completion arity is what takes the volatile rows and row counts and
      actually nests them in the `{:data {:rows []}}` structure. Without
      calling it once (and ONLY once) you end up with no actual
      results; they are just left in memory.
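      
      Because this init/completion discipline is easy to get wrong, here is a toy sketch (not the actual middleware; `run-query!` and the reducing functions are stand-ins) of reducing several subqueries through one reducing function while calling its init and completion arities exactly once:
      
      ```clojure
      (ns sketch.pivot-rf
        "Toy sketch of the rf discipline for pivot subqueries: init once, completion
        once, each subquery reduced separately (in the real code each reduce owns
        its own connection, opened and closed around it).")
      
      (defn reduce-pivot-queries
        "`rf` is the fully composed reducing function. `run-query!` is a hypothetical
        runner that reduces one query's rows with the reducing fn it is given and
        returns the accumulator."
        [rf queries run-query!]
        (let [init (rf)]                               ; init arity called once
          (rf                                          ; completion arity called once
           (reduce (fn [acc query]
                     ;; The accumulator is passed explicitly, so the init arity is
                     ;; never re-invoked. Wrap rf so that if the runner completes
                     ;; (e.g. via transduce), the inner completion is a no-op.
                     (run-query! query (completing rf identity) acc))
                   init
                   queries))))
      
      ;; Toy usage: a "query" is just a vector of rows; "running" it is reducing it.
      (comment
        (reduce-pivot-queries
         (fn ([] []) ([acc] {:data {:rows acc}}) ([acc row] (conj acc row)))
         [[[1 2]] [[3 4]]]
         (fn [query rf* acc] (reduce rf* acc query)))
        ;; => {:data {:rows [[1 2] [3 4]]}}
        )
      ```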
  8. May 30, 2022
  9. May 27, 2022
  10. May 26, 2022
  11. May 25, 2022
    • Persist model integration changes for front end (#22933) · 4a567c28
      Case Nelson authored
      * Add can-manage to databases to help front-end
      
      * Only return persisted true when the state agrees
      
      * Add event for persisted-info to catch new models and turn on persistence when appropriate
      
      * Fix bad threading
      
      * Add comment about the case being detected to add the persisted-info
      
      * Fix pre-existing bug in expand-db-details where a function was used as the condition
    • Indicate field for database error (#22804) · 3ecc3833
      metamben authored
      
      * Indicate field responsible for database error
      
      * Implement new interface
      
      * Make error showing logic a little easier to understand
      
      * Show individual field error in database form
      
      * Add backend tests for connection errors when adding a database
      
      * Add new form error component
      
      * Make database form error message pretty
      
      * Make main database form error message field bold
      
      * Change create account database form according to feedback
      
      * Fix failed E2E tests caused by the UI changes.
      
      * Make it easier to tree-shake "lodash"
      
      * Change according to PR review
      
      * Cleanup + remove FormError (to be included in a separate PR)
      
      Co-authored-by: Mahatthana Nomsawadi <mahatthana.n@gmail.com>
    • Respect advanced permissions for model persistence (#22860) · 31a1f36b
      Anton Kulyk authored
      * Expose `persisted-models-enabled` to authenticated
      
      * Remove `isAdmin` check for showing persistence toggle
  12. May 24, 2022
    • Put model refers on their own line in cmd.copy (#22913) · 56a8eb0e
      Case Nelson authored
      Since we have linters that force the refers to be both sorted and kept
      within the max line length, adding a new model in the middle forced people to realign the whole refer vector.
      
      Having run into this twice: if we keep each refer on its own line, the
      initial add, as well as later merges, will be much less painful.
    • Add anchor time to persistence schedule (#22827) · dbf40b72
      Case Nelson authored
      * Add anchor time to persistence schedule
      
      Introduces a new public-setting persisted-model-refresh-anchor-time,
      representing the time to begin the refresh job. Defaults to midnight
      "00:00".
      
      Also peg the cron jobs to run in the first available of: reporting timezone, system timezone, or
      UTC (sketched below). This means that if a user needs the refresh to be anchored consistently to a certain time in a certain place, they need to update the reporting timezone to match.
      
      * Add tests for anchor time
      
      * Force anchor-time to midnight if refresh hours is less than 6
      
      * Need to check that hours was passed in
      
      * Fix ns lint
      
      * Add tests to ensure timezone is being set on trigger
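      
      A sketch of the anchoring logic from the first bullet (hypothetical helper names; the real implementation configures Quartz triggers directly): pick the first available timezone and build a Quartz-style cron expression that fires every `refresh-hours` hours starting from the anchor time.
      
      ```clojure
      (ns sketch.refresh-anchor
        "Sketch: anchor the persistence refresh cron to a time of day, in the first
        available of reporting timezone, system timezone, or UTC."
        (:require [clojure.string :as str]))
      
      (defn refresh-timezone
        "`settings` is a hypothetical map standing in for the reporting/system settings."
        [{:keys [report-timezone system-timezone]}]
        (or (not-empty report-timezone)
            (not-empty system-timezone)
            "UTC"))
      
      (defn anchored-cron
        "Quartz-style cron firing every `refresh-hours` hours starting at \"HH:mm\"."
        [anchor-time refresh-hours]
        (let [[hh mm] (map #(Integer/parseInt %) (str/split anchor-time #":"))]
          (format "0 %d %d/%d * * ? *" mm hh refresh-hours)))
      
      ;; (refresh-timezone {:report-timezone "US/Pacific"}) ;=> "US/Pacific"
      ;; (anchored-cron "00:00" 6)                          ;=> "0 0 0/6 * * ? *"
      ```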
  13. May 23, 2022
    • Fix flaky test in randomized schedules (#22865) · 72a3cf66
      dpsutton authored
      We assert that the schedule gets randomized and is no longer one of
      the default schedules, so we need to make sure the generator cannot return one of
      the default schedules in that case :). Low probability (1 in 58), but
      with 28 test suites run for each commit it starts to add up and becomes
      Flaky™.
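      
      In other words, the test's schedule generator should be constrained so it can never produce a default schedule, along the lines of this sketch (names hypothetical):
      
      ```clojure
      ;; Sketch: re-roll the random schedule until it differs from every default.
      (defn random-non-default-schedule
        "`gen-schedule` is a zero-arg fn producing a random cron string."
        [gen-schedule default-schedules]
        (first (remove (set default-schedules) (repeatedly gen-schedule))))
      
      ;; (random-non-default-schedule #(rand-nth ["0 0 * * * ? *" "0 30 4 * * ? *"])
      ;;                              ["0 0 * * * ? *"])
      ;; ;=> "0 30 4 * * ? *"
      ```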
    • Put `default-query-constraints` behind a function for setting (#22792) · b08cbcca
      dpsutton authored
      Settings require db access, so they cannot be read at read/require time. Setting
      values cannot be read until we have initialized the db.
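      
      The shape of the change, sketched with a hypothetical stand-in setting: evaluating the constraints map at require time would read a setting before the app DB exists, so it becomes a function that reads the setting at call time.
      
      ```clojure
      ;; Before (problematic): a def forces the setting read at require time.
      ;; (def default-query-constraints {:max-results (query-limit-setting) ...})
      
      ;; After (sketched): wrap it in a function so the setting is read lazily.
      (defn- query-limit-setting []   ; hypothetical stand-in for the real setting
        10000)
      
      (defn default-query-constraints []
        {:max-results           (query-limit-setting)
         :max-results-bare-rows 2000})
      ```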
    • Handle `nil` cache during initial cache population (#22779) · 6ca66e6e
      dpsutton authored
      * Handle `nil` cache during initial cache population
      
      This bug only happens soon after startup. The general idea of the cache
      is that we repopulate it if needed, and other threads can use the stale
      version while it is being repopulated. This is valid except at startup,
      before the cache has finished populating. The `(setting.cache/cache)` value will
      still be nil to other threads while the first thread populates it.
      
      This is a race, but it sounds like an unlikely one to hit in practice. However,
      we have a special case: `site-url`. We have middleware that checks the
      site-url on every api request and sets it if it is nil. So if an
      instance gets two requests before the cache has been populated, it will
      set it to whatever value is in the headers, overriding whatever has been
      set in the UI.
      
      The following snippet was how I diagnosed this. But to simulate startup
      you can just `(require 'metabase.models.setting.cache :reload)` to get
      back to an "empty" cache and let the threads race each other. I also put
      high contention on reloading by dropping the cache update
      interval to 7 milliseconds. But this refresh interval does not matter:
      we just fall back to the old version of the cache. It is only the
      initial cache population that uses `nil` as the current cache.
      
      ```clojure
      public-settings=> (let [mismatch1 (atom [])
                              mismatch2 (atom [])
                              iterations 100000
                              latch (java.util.concurrent.CountDownLatch. 2)
                              nil-value (atom [])]
                          (future
                            (dotimes [_ iterations]
                              (let [value (site-url)
                                    cache (metabase.models.setting.cache/cache)]
                                (when (not= value "http://localhost:3000")
                                  (swap! mismatch1 conj value))
                                (when (not= (get cache "site-url") "http://localhost:3000")
                                  (swap! nil-value conj cache))))
                            (.countDown latch))
                          (future
                            (dotimes [_ iterations]
                              (let [value (site-url)
                                    cache (metabase.models.setting.cache/cache)]
                                (when (not= value "http://localhost:3000")
                                  (swap! mismatch2 conj value))
                                (when (not= (get cache "site-url") "http://localhost:3000")
                                  (swap! nil-value conj cache))))
                            (.countDown latch))
                          (.await latch)
                          (def nil-value nil-value)
                          [(count @mismatch1) (take 10 @mismatch1) (count @mismatch2) (take 10 @mismatch2)])
      [0 () 1616 (nil nil nil nil nil nil nil nil nil nil)]
      ```
      
      * Don't attempt to get setting values from db/cache before db ready
      
      * We check `db-is-setup?` above this level
      
      The db check has to happen higher up, at `db-or-cache-value`: neither the
      db nor the cache (which is populated by selecting from the db) will work
      if the db is not set up. (See the sketch after this list.)
      
      * Remove some of the nested whens (thanks howon)
      
      Lots of the nesting came from requiring a var and checking it for nil. It
      can never really be nil, but to guard cautiously we
      put in an `or` with `(constantly false)`, combine the two predicates
      into a single `and`, and then swap our homegrown `(when (seq x) x)` for
      the equivalent `(not-empty x)`.
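      
      A condensed sketch of that guard (parameter names hypothetical): the db-setup check wraps both the cache read and the db fallback, so neither path runs before the app DB is ready.
      
      ```clojure
      ;; Sketch: consult neither the cache nor the app DB until the DB is set up.
      (defn db-or-cache-value
        "`db-setup?` - zero-arg predicate; `cache` - zero-arg fn returning the current
        cache map (possibly nil during initial population); `db-lookup` - fn of
        setting-key that hits the app DB."
        [setting-key db-setup? cache db-lookup]
        (when (db-setup?)
          (or (some-> (cache) (get setting-key))
              (db-lookup setting-key))))
      
      ;; (db-or-cache-value "site-url" (constantly false) (constantly nil) (constantly nil))
      ;; ;=> nil (db not ready yet, so nothing is consulted)
      ```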
  14. May 19, 2022
  15. May 18, 2022
  16. May 17, 2022
    • Whitelabelling users font family font setting (#22791) · 856c46e2
      adam-james authored
      
      * Add font utilities to derive list of available fonts from font dirs
      
      This PR adds two defsettings to let Enterprise users set the font for their instance. The
      'available-fonts' setting can be read by the frontend to populate the dropdown menu. The 'application-font' setting can be
      written by the frontend to save the chosen font.
      
      * Adding some fonts and font selector
      
      * Send  (("dirname", "font name"),) to frontend
      
      Instead of only sending a list of names, send a tuple of `[directory-name, Display Name]`
      
      * Adding half of the fonts
      
      * Font list derived from font directory/font file names
      
      This is an attempt to derive the 'Display Name' of a font by comparing the directory name to the file names. '_' is
      replaced with spaces, but '-' exists in the 'Lato' font directory name, and we may have naming confusion in the
      future. Basically, I want to be liberal with what font directory/file names we can receive.
      
      Not certain this is the right approach yet, but it will work and correctly provides names based on the fonts available.
      
      * Adding remaining fonts
      
      * Using Default Font family in visual components
      
      * Small change to fix some lint warnings
      
      * Removing Noto Sans JP
      
      * Move logging
      
      * Alter hardcoded font path
      
      * Added cypress test
      
      * Change Cypress Test to look for correct text
      
      * Added unit tests for fonts.clj, and simplified creating display name
      
      The display-name approach is simple: underscores become spaces, and names are split on dashes, with the first word taken
      from the split (see the sketch after this entry).
      
      This might not be robust to future font installations, but it will be a good v1
      
      * adding visual test
      
      * fixing cypress test
      
      * Actually add the unit tests
      
      * Fix lint problems
      
      * Add setter to application-font to prevent invalid font setting
      
      * Addressing PR review points
      
      - fixed 'normalize-font-dirname' description
      - 'contains-font-file?' returns true/false
      - 'available-fonts-test' now properly uses 'is'
      
      * Make font-path private
      
      * Lint error.
      
      * Modified fonts to use java.io/resource to hopefully work in a jar
      
      * Simplified Lato folder name
      
      * PR Cleanup
      
      * Address some backend feedback
      
      * Lint error from reflection with .toString on a path
      
      I removed the explicit .toString inside the lambda function and str/includes still works.
      
      * no longer need u.files in test ns
      
      Co-authored-by: Nick Fitzpatrick <nick@metabase.com>
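      
      A sketch of the display-name derivation mentioned above (illustrative helpers, not the real fonts namespace):
      
      ```clojure
      (ns sketch.fonts
        "Sketch: derive a font's display name from its directory name by splitting
        on dashes, keeping the first segment, and turning underscores into spaces."
        (:require [clojure.string :as str]))
      
      (defn display-name [dirname]
        (-> dirname
            (str/split #"-")
            first
            (str/replace "_" " ")))
      
      ;; (display-name "Open_Sans")          ;=> "Open Sans"
      ;; (display-name "Roboto_Mono-latest") ;=> "Roboto Mono"  (hypothetical dirname)
      ```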
    • Add extra logging for persisted models (#22784) · bf5ca940
      Case Nelson authored
      Add individual logs for each model.
      Add a final log that includes successes and errors.
    • Prevent do-with-timeout from throwing Throwable return values (#22728) · 3521de88
      metamben authored
      Since the documentation of do-with-timeout doesn't mention that return
      values that are instances of Throwable will be thrown instead of being
      returned like other values, it's better not to do this.
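      
      A minimal sketch of the intended behavior (simplified, with a hypothetical name; the real helper has more plumbing): a Throwable return value is handed back like any other value rather than re-thrown.
      
      ```clojure
      ;; Sketch: run `thunk` on another thread with a timeout; return its result
      ;; as-is, even if that result happens to be a Throwable instance.
      (defn do-with-timeout* [timeout-ms thunk]
        (let [fut    (future (thunk))
              result (deref fut timeout-ms ::timed-out)]
          (if (= result ::timed-out)
            (do (future-cancel fut)
                (throw (ex-info "Timed out." {:timeout-ms timeout-ms})))
            result)))
      
      ;; (do-with-timeout* 100 #(ex-info "returned, not thrown" {}))
      ;; ;=> an ExceptionInfo value (returned, not thrown)
      ```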
    • Revert backend api actions (#22755) · 3acbe649
      Bryan Maass authored
      * Revert "Revert new precondition"
      
      This reverts commit 1f97a4d6ccfdaa92ff82fd0256f08f3512c3c5e7.
      
      * Revert "plug in and alter tests to use with-actions-test-data"
      
      This reverts commit 2f33cb47b700b1da106adf54d41928c5956978d0.
      
      * Revert "Add Database-local Setting for enabling Actions for a specific Database"
      
      This reverts commit a52f8d0a478a31bb1f1b4a6693abf8f8e658d9cf.
      
      * Revert "Make sure api.actions is loaded"
      
      This reverts commit 7d7b913a7a0d64cbe2b6907f389d994a1130c0f5.
      
      * Revert "Add `experimental-enable-actions` feature flag (#22559)"
      
      This reverts commit 9b313752.
      
      * Revert "SQL JDBC Delete Row Action (#22608)"
      
      This reverts commit d6864f38.
      
      * Remove actions api docs
      
      * Revert "Destructible version of `test-data` dataset for testing writeback actions (#22692)"
      
      This reverts commit 15c57d9a.
      
      * keep around a few misc improvements
      
      * removes precondition
      
      - {:pre [(nil? (namespace symb))]}
      
      * various fixes that needn't be reverted
  17. May 16, 2022
    • Persist auto enable (#22756) · c5a31c48
      Case Nelson authored
      
      * Auto enable persistence for models
      
      When persistence is turned on for a db, we want to enable persistence
      caching for all models in the db.
      
      We do this by finding, at the top of the scheduled refresh task, any models
      without a PersistedInfo and creating one that will get picked up by
      the refresh (sketched below).
      
      This necessitated introducing another "off" state on PersistedInfo that
      will get set from the front end, manually disabling persistence on a
      model. This turns PersistedInfo into a marker so that when the refresh
      task runs again, these models will not be turned back on.
      
      The prune job will prune "off" or "deletable" PersistedInfo. Since we don't
      have a second "off-ing" state, the prune job will "drop if exists" the
      cache table each time. This may need to change.
      
      * Cherry-pick persist-refresh changes from persist-refresh-fail-email
      
      * Ready models when enabling persistence on db
      
      * Handle automatic model persistence in Tools table
      
      * Address review: insert-many instead of doseq insert
      
      Co-authored-by: Anton Kulyk <kuliks.anton@gmail.com>
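      
      A sketch of the auto-enable step referenced above (simplified rows and hypothetical field names): models in a persistence-enabled database with no PersistedInfo get a new row, while models whose existing row is "off" are left alone.
      
      ```clojure
      (ns sketch.auto-enable-persistence
        "Sketch: at the top of the scheduled refresh task, create PersistedInfo rows
        for models that don't have one; an existing \"off\" row acts as a marker and
        is never turned back on.")
      
      (defn models-to-enable
        [models persisted-infos]
        (let [has-info? (set (map :model-id persisted-infos))]
          (remove (comp has-info? :id) models)))
      
      (defn new-persisted-infos [models]
        (for [{:keys [id database-id]} models]
          {:model-id id :database-id database-id :state "creating"}))
      
      ;; (models-to-enable [{:id 1 :database-id 7} {:id 2 :database-id 7}]
      ;;                   [{:model-id 2 :state "off"}])
      ;; ;=> ({:id 1, :database-id 7})   ; model 2 keeps its "off" marker
      ```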