This project is mirrored from https://github.com/metabase/metabase.
  1. Apr 04, 2024
      :robot: backported "Creator sentiment emails iterate" (#40925) · c10232a2
      metabase-bot[bot] authored
      
      * Cleanup creator sentiment and follow up email code (#39016)
      
      * creator sentiment cleanup
      
      * change setting to be survey-enabled
      
      * fix test
      
      * change send-creator-sentiment-emails
      
      * Creator sentiment emails iterate (#40922)
      
      * Send creator emails every week
      
Bucket the emails mod 52 instead of mod 12, and use the current week of the
year as the anchor (see the sketch after the examples below).
      
      ```clojure
      user=> (require '[java-time.api :as t])
      nil
      user=> (let [every-day (take 1000 (t/iterate t/plus (t/local-date 2024 1 1) (t/days 1)))
                   f         (fn [d]
                               (let [wf (java.time.temporal.WeekFields/of (java.util.Locale/getDefault))]
                                 (.get d (.weekOfWeekBasedYear wf))))]
               (sort (keys (frequencies (map f every-day)))))
      (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
       22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39
       40 41 42 43 44 45 46 47 48 49 50 51 52)
      ```
      
      ```clojure
      user=> (metabase.util.cron/cron-string->schedule-map "0 0 2 ? * 7")
      {:schedule_minute 0,
       :schedule_day "sat",
       :schedule_frame nil,
       :schedule_hour 2,
       :schedule_type "weekly"}
      ```
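
A minimal sketch of the mod-52 anchoring described above; the function name and the idea of bucketing on some stable per-instance value are assumptions for illustration, not the actual Metabase code.

```clojure
;; Weeks of the year run 1-52 (see the frequencies above); buckets run 0-51.
;; `instance-hash` stands in for whatever stable per-instance value gets bucketed.
(defn send-this-week?
  [instance-hash current-week-of-year]
  (= (mod instance-hash 52) (mod current-week-of-year 52)))

(send-this-week? 123 16) ;; => false
(send-this-week? 120 16) ;; => true
```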
      
      * fix typo in creator sentiment email, clean up a bit
      
Typo: had `:verison` instead of `:version`.
      
And while I'm in there, keep the payload all in one file and have the email
namespace do the json -> bytes -> base64-encode step, so the data is all in one
place and the setting that controls whether it is present is co-located.
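
A hedged sketch of that json -> bytes -> base64 step (not the actual Metabase code; cheshire and the payload key are assumptions for illustration):

```clojure
(require '[cheshire.core :as json])
(import '(java.util Base64))

(defn payload->b64
  "Serialize a payload map to JSON, take its UTF-8 bytes, and Base64-encode them."
  [payload]
  (.encodeToString (Base64/getEncoder)
                   (.getBytes (json/generate-string payload) "UTF-8")))

;; Hypothetical payload key; the real payload lives next to the survey-enabled setting.
(payload->b64 {:version "v0.49.0"})
;; => a Base64 string starting with "eyJ2ZXJzaW9uIjoi"
```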
      
      ---------
      
Co-authored-by: Jerry Huang <34140255+qwef@users.noreply.github.com>
Co-authored-by: dpsutton <dan@dpsutton.com>
  6. Feb 06, 2024
      Mongo java driver upgrade (#38017) · 113c0558
      lbrdnk authored
      
      * tmp: patched monger for 4.11.1 mongo java driver
      
      * tmp: Update monger utils
      
Aggregations probably won't work now, but we are not using those from monger anyway. With this change in place I'm able to load the needed namespaces and create the test-data dataset successfully.
      
The commit contains a lot of kondo errors that should be resolved while porting.
      
      * WIP: Monger removed in favor of java driver wrapper
      
      * Update java driver wrapper
      
      * Update srv test
      
      * Update comments
      
* Use non-keywordized run-command for `dbms-version`
      
      * Fix according to e2e tests
      
      * Cleanup
      
      * Separate `java-driver-wrapper` into multiple namespaces
      
      * Fix semantic type inference for serialized json
      
      * Fix options for run-command
      
      * Cleanup json namespace
      
      * Cleanup conversion namespace
      
      * Cleanup operators
      
      * Update kondo in operators ns
      
      * Cleanup connection namespace
      
      * Cleanup mongo namespace
      
      * Cleanup util namespace
      
      * Add todo
      
      * Move session related code to util
      
      * Cleanup database namespace
      
      * Update docstring for conn string generation
      
      * Update docstrings
      
      * Update tests
      
      * Update linter for with macros
      
      * Update modules/drivers/mongo/src/metabase/driver/mongo/connection.clj
      
Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
      
      * Update modules/drivers/mongo/src/metabase/driver/mongo/connection.clj
      
Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
      
* Use transient in from-document for building a map (see the sketch after this list)
      
Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
      
      * Update can-connect to use let form
      
      - Avoid nested `with-mongo-database` call.
      - Avoid creating a set used for searching for database name.
      
      * Remove redundant set use from describe-database
      
      * Change from-document keywordize default to false
      
      * Remove log message translation
      
      * Update maybe-add-ssl-context-to-builder! to always return builder
      
      * Indent from-document
      
      * Remove redundant ConvertToDocument extensions
      
* Use ordered-map in do-find's sort
      
      * Use ex-info in details-normalized
      
      * Add imports and update comment in execute ns
      
      * Update fixme comment
      
      * Pass opts instead of a selection to from-document in do-find
      
      * Avoid unnecessary double dot call
      
      * Update connection test ns according to review remarks
      
      * Make tests parallel
      
      * Docstring update
      
      * Update docstring
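
A minimal sketch of the transient-based map building mentioned in the from-document bullet above; the function name and inputs are illustrative, not the real driver code.

```clojure
;; Build a map from key/value pairs with a transient accumulator and persist it
;; once at the end, instead of assoc'ing onto a persistent map in a loop.
(defn pairs->map
  [kvs]
  (persistent!
   (reduce (fn [m [k v]] (assoc! m k v))
           (transient {})
           kvs)))

(pairs->map [["_id" 1] ["name" "widget"]])
;; => {"_id" 1, "name" "widget"}
```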
      
      ---------
      
Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
  17. Dec 11, 2023
      adding perm-graph filtering on db or group id (#36543) · 19401c1b
      bryan authored
      * a more refined first crack at adding perm-graph access for db or group id
      
* new routes: filter data-perm-graph on db or group (see the sketch after this list)
      
      add tests
      
      * remove inline defs
      
      * code review responses
      
      * fix typo
      
      * fix the other tests
      
      * update the tests to use the right malli schema
      
      * reuse private vars in tests
      
      * pull graph checker into test util ns and apply it
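
A hedged sketch of what filtering the permission graph on a db or group id amounts to; the `{group-id {db-id perms}}` shape and the helper are assumptions for illustration, not the actual route code.

```clojure
;; Filtering on a group id keeps one top-level entry; filtering on a db id keeps
;; only that db's sub-entry within every group.
(defn filter-graph
  [graph {:keys [group-id db-id]}]
  (cond->> graph
    group-id (into {} (filter (fn [[gid _]] (= gid group-id))))
    db-id    (into {} (map (fn [[gid dbs]] [gid (select-keys dbs [db-id])])))))

(filter-graph {1 {10 {:data :all}, 20 {:data :none}}
               2 {10 {:data :none}}}
              {:db-id 10})
;; => {1 {10 {:data :all}}, 2 {10 {:data :none}}}
```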
  18. Dec 07, 2023
      Refactoring formatting code for common utility (#36490) · 58f22594
      Mark Bastian authored
      * Apply column formatting to CSV exports
      
      This PR applies the same formatting logic to CSV exports as it does to pulse bodies (e.g. HTML).
      
      Formerly, there were two related formatting paths:
      - `metabase.query-processor.streaming.common/format-value`, which is from a protocol that takes an object and returns a string. It is what was used to export csv data. It applies no actual formatting, only converts objects to strings.
      - `metabase.pulse.render.body/get-format`, which builds a formatter from an optional time zone, column metadata, and visualization settings. This formatter takes a string and formats it. It was only used for rendering inline artifacts, such as embedded HTML in emails.
      
      The first function is insufficient to format row data as it is unaware of the column metadata and viz settings. We need to use that data everywhere data is exported in a uniform way.
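
A toy, self-contained illustration of why plain stringification is not enough; the helper and metadata keys below are hypothetical, not Metabase's API. The point is that the formatter is built once from column metadata and viz settings, then applied to every stringified value in that column.

```clojure
;; A percentage column needs its metadata to render "25%" rather than "0.25".
(defn make-formatter [{:keys [semantic-type]}]
  (case semantic-type
    :type/Percentage #(str (long (* 100 (Double/parseDouble %))) "%")
    identity))

(let [col       {:semantic-type :type/Percentage}
      formatter (make-formatter col)]
  (map (comp formatter str) [0.25 0.5]))
;; => ("25%" "50%")
```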
      
      The solution is to lift `get-format` from `metabase.pulse.render.body` to a more common location (`metabase.pulse.render.common` in this PR step, but needs to be moved out of the pulse code to be a more shared concern) and use it everywhere artifacts are generated.
      
      For csv export, this was achieved as follows in `metabase.query-processor.streaming.csv`:
      
      ```clojure
      (defmethod qp.si/streaming-results-writer :csv
        [_ ^OutputStream os]
        (let [writer     (BufferedWriter. (OutputStreamWriter. os StandardCharsets/UTF_8))
              formatters (atom {})]
          (reify qp.si/StreamingResultsWriter
            (begin! [_ {{:keys [ordered-cols results_timezone]} :data} viz-settings]
              (swap! formatters (constantly (zipmap
                                              ordered-cols
                                              (map (fn [col]
                                                     (p.common/get-format results_timezone col viz-settings))
                                                   ordered-cols))))
              (csv/write-csv writer [(map (some-fn :display_name :name) ordered-cols)])
              (.flush writer))
      
            (write-row! [_ row _row-num cols {:keys [output-order]}]
              (let [[ordered-row
                     ordered-cols] (if output-order
                                     (let [row-v  (into [] row)
                                           cols-v (into [] cols)]
                                       [(for [i output-order] (row-v i))
                                        (for [i output-order] (cols-v i))])
                                     [row cols])]
                (csv/write-csv writer [(map (fn [c r]
                                              (let [formatter (@formatters c)]
                                                (formatter (common/format-value r))))
                                            ordered-cols ordered-row)])
                (.flush writer)))
      
            (finish! [_ _]
              ;; TODO -- not sure we need to flush both
              (.flush writer)
              (.flush os)
              (.close writer)))))
      ```
      
The formatters for each column are built in the `begin!` function and then looked up in each `write-row!`. The existing `format-value` is used to produce a string, which is then passed into the looked-up column formatter.
      
Note that the new unit tests simulate a pulse, grab the provided temp files as attachments, and analyze those for correctness. This should work in a CI environment so long as the test process has permission to both write attachments to the temp directory and read those attachments back out. Also note that these tests can be slow (but not terribly so).
      
      Primary changes:
      - `metabase.email.messages` - fix spelling
      - `metabase.pulse.render.body` - move `get-format` out of this ns
      - `metabase.pulse.render.common` - move `get-format` into this ns
      - `metabase.query-processor.streaming.csv` - new logic to apply pulse renderer formatting to csv exports
      - `metabase.pulse.pulse-integration-test` - adding unit tests
      
      One TODO before a final commit of this PR is to move the `get-format` logic out of a pulse ns into something more general. Ultimately, it would be nice if this was a common capability used by both BE and FE.
      
      * Refactoring formatting code for common utility
      
This PR refactors `metabase.pulse.render.common` to `metabase.formatter` as this is code we want applied in several locations, not just in pulses. It also updates references to these nses and the consistent-alias config.
      
A key observation of this formatting code, and the reason for the refactor, is that it is "metabase-aware": it takes into account column metadata and visualization settings when building formatters rather than just being a simple generic date or number formatter. This is a common code path that should be used any time we are rendering static assets and could potentially be used as common FE code in future development.
      
      Moves:
      - `metabase.pulse.render.common` to `metabase.formatter`
      - `metabase.pulse.render.datetime` to `metabase.formatter.datetime`
      - `metabase.pulse.render.common-test` to `metabase.formatter-test`
      - `metabase.pulse.render.datetime-test` to `metabase.formatter.datetime-test`
      
      * Ordering consistent aliases in kondo config
      
      * Rebase fix to use formatter ns in streaming.csv
      
      * Adding `metabase.formatter` require.
      
      * Updating require alias on new test.
  20. Dec 01, 2023
      Document and test adding `results_metadata` to qp middleware for static viz (#36245) · 06848968
      Mark Bastian authored
      * Adding results_metadata to qp middleware
      
      This adds two middlewares to `metabase.query-processor.middleware.results-metadata`:
      - `inject-result-metadata`: Adds `result_metadata` from the context, if present, into the query map during preprocessing.
      - `merge-existing-metadata`: A post-processing middleware that merges `results_metadata` (if present) from the query data into the metadata.
      
      This has the effect of preserving existing metadata, which is particularly important for user-curated metadata, such as semantic type overrides.
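
A minimal, hypothetical sketch of the merge idea (not the actual middleware): user-curated fields such as a semantic type override win over the freshly computed column metadata.

```clojure
;; Merge curated column metadata over computed metadata, column by column,
;; so overrides like :semantic_type survive a re-run of the query.
;; (Assumes the two vectors are already aligned by position.)
(defn merge-existing-cols
  [computed-cols existing-cols]
  (mapv (fn [computed existing]
          (merge computed (select-keys existing [:semantic_type :display_name])))
        computed-cols
        existing-cols))

(merge-existing-cols
 [{:name "email" :base_type :type/Text :semantic_type nil}]
 [{:name "email" :semantic_type :type/Email}])
;; => [{:name "email", :base_type :type/Text, :semantic_type :type/Email}]
```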
      
      This may also facilitate addressing the following TODO in the same ns:
      
      ```
      ;; 1. Is there some way we could avoid doing this every single time a Card is ran? Perhaps by passing the current Card
      ;;    metadata as part of the query context so we can compare for changes
      ```
      
      * Preserving result_metadata in query processor
      
      This PR uses `qp.util/combine-metadata` to simplify the combination of provided `:result_metadata` with computed metadata. It also updates pulse rendering code to use this new code path and adds and updates tests to vet this logic.
      
      * fmt
      
      * Modifying PR to document and use existing fns
      
      The current qp pipeline will correctly propagate custom metadata if a question is derived from a model or if the model is processed like so:
      
      ```clojure
      (qp/process-query
        (assoc-in dataset_query [:info :metadata/dataset-metadata] result_metadata))
      ```
      
      This PR fixes rendering unit tests to clarify this and provides a much better working example in the `dev.render-png` ns.
      
      * Updating test
      
      * Adding Pulse w/Metadata Tests
      
This PR adds a unit test that constructs a dashboard and simulates a pulse email, then checks the resulting HTML for correctness.
      
      A dependency on the [hickory](https://github.com/clj-commons/hickory) library was added to dev. This allows us to easily parse HTML into data and navigate the parse tree.
      
      Note that it does NOT check the attachments for formatting correctness.
      
      Spelling and import consistency changes were also made.
  29. Sep 26, 2023
      Remove deprecated `current-db-time` method (#32686) · 47a52b57
      Cam Saul authored
      
      * Remove deprecated current-db-time method
      
      * Address PR feedback
      
* sync-timezone! should handle java.time.ZoneId and java.time.ZoneOffset (see the sketch after this list)
      
      * Allow time zone offsets as zone IDs
      
      * Fix tests and suppress clj-kondo on a few forms
      
      * Suppress warning on one more occurrence of db-default-timezone
      
      * Map offset Z to time zone UTC when syncing
      
      * Accept zone offsets, zone ids and strings for h2 and oracle
      
      * Mark snowflake as not supporting report timezone
      
      * Revert "Mark snowflake as not supporting report timezone"
      
      This reverts commit 70b15a60334fda707d695bc6c01621961f1c5aa8.
      
      * Fix report-timezone handling
      
      * Numeric timezone
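
A hedged sketch of the kind of coercion these commits describe; the helper name and exact mapping are assumptions for illustration, not the actual `sync-timezone!` code.

```clojure
;; Accept a java.time.ZoneId, java.time.ZoneOffset, or string, return a zone-id
;; string, and map the bare offset "Z" to "UTC".
(defn normalize-zone
  [tz]
  (let [id (cond
             ;; java.time.ZoneOffset extends ZoneId, so this covers both
             (instance? java.time.ZoneId tz) (.getId ^java.time.ZoneId tz)
             (string? tz)                    tz)]
    ;; a bare "Z" offset becomes UTC, as described above
    (if (= id "Z") "UTC" id)))

(normalize-zone (java.time.ZoneId/of "America/Chicago")) ;; => "America/Chicago"
(normalize-zone java.time.ZoneOffset/UTC)                ;; => "UTC"
(normalize-zone "+07:00")                                ;; => "+07:00"
```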
      
      ---------
      
Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
Co-authored-by: Tamás Benkő <tamas@metabase.com>
  31. Sep 20, 2023
      X-ray improvements for clarity and literacy (#33815) · 764fcd06
      Mark Bastian authored
      
      * Fix clj-kondo
      
      was getting an error:
      
      ```clojure
user=> (require '[clj-kondo.hooks-api :as hooks])
      Syntax error compiling at (clj_kondo/impl/analysis/java.clj:61:6).
      Unable to find static field: ASM9 in interface org.objectweb.asm.Opcodes
      ```
      
      checking where the bad version comes from:
      
      ```clojure
      user=> (io/resource "org/objectweb/asm/Opcodes.class")
              "0x37e9d2e2"
              "jar:file:/Users/dan/.m2/repository/org/ow2/asm/asm-all/5.2/asm-all-5.2.jar!/org/objectweb/asm/Opcodes.class"]
      ```
      
      and running
      
      ```
      > clj -X:deps tree :aliases '[:dev :ee :ee-dev :drivers :drivers-dev :alpha :socket :morse :reveal]' > deps
      
      ❯ grep asm deps
        X org.ow2.asm/asm 9.4 :superseded
        . org.ow2.asm/asm-all 5.2
            . org.ow2.asm/asm 9.5 :newer-version
            . org.ow2.asm/asm-commons 9.5
              . org.ow2.asm/asm 9.5
              . org.ow2.asm/asm-tree 9.5
                . org.ow2.asm/asm 9.5
          X org.ow2.asm/asm 9.2 :older-version
      ```
      
The 9.4 and 9.5s are all close to each other. But the asm-all 5.2 is far
too old. That's brought in by eastwood, so I'll exclude it and see if it
works.
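
A hedged sketch of what that exclusion might look like in deps.edn; the alias name and the version placeholder are illustrative, and the real coordinates live in Metabase's own deps.edn.

```clojure
;; Exclude the ancient asm-all that eastwood drags in, so the newer
;; org.ow2.asm/asm on the classpath wins and the clj-kondo hooks can load.
{:aliases
 {:dev {:extra-deps {jonase/eastwood {:mvn/version "RELEASE" ; placeholder; pin a real version
                                      :exclusions  [org.ow2.asm/asm-all]}}}}}
```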
      
      * Can't throw error in api endpoint for linked entities x-ray
      
      `create-linked-dashboard` creates a dashboard if there are no linked
      entities, so this is unreachable code
      
      * unify card and query code pathways.
      
      now it's quite clear how card and query diverge:
      
      ```clojure
      (defmethod automagic-analysis Card
        [card {:keys [cell-query] :as opts}]
        (let [root     (->root card)
              cell-url (format "%squestion/%s/cell/%s" public-endpoint
                               (u/the-id card)
                               (encode-base64-json cell-query))]
          (query-based-analysis root opts
                                (when cell-query
                                  {:cell-query cell-query
                                   :cell-url   cell-url}))))
      
      (defmethod automagic-analysis Query
        [query {:keys [cell-query] :as opts}]
        (let [root       (->root query)
              cell-query (when cell-query (mbql.normalize/normalize-fragment [:query :filter] cell-query))
              opts       (cond-> opts
                           cell-query (assoc :cell-query cell-query))
              cell-url   (format "%sadhoc/%s/cell/%s" public-endpoint
                                 (encode-base64-json (:dataset_query query))
                                 (encode-base64-json cell-query))]
          (query-based-analysis root opts
                                (when cell-query
                                  {:cell-query cell-query
                                   :cell-url   cell-url}))))
      ```
      
      * Adding frequencies to a test to prevent non-deterministic behavior
      
      * Fixing linter check and ordering issue in test.
      
      * Adding TODO future task to replace /rule/ with /dashboard-template/ in paths.
      
      * Added large ns block to metabase.automagic-dashboards.core
      
      * Fixing spelling mistakes
      
      * Some docstrings and renaming for clarity.
      
      * Added ->field and ->root (covering source) tests
      
* TODO - Fix `minumum` spelling error in a standalone PR.
      
* Breaking up tests to understand what is going on with bulk failures
      
      * Broke make-cards into several logical steps so that the code is more readable. Added tests for each stage.
      
      * Updating references to rules.clj for i18n generation.
      
      * Inlined make-context into apply-dashboard-template. This makes the steps of creating dimensions, metrics, and filters more explicit. ATM we still need to resolve the logic around inject-root as this has dependence on the context and not the individual dimensions.
      
      * put build stuff on classpath for lsp
      
      * calculate base-context once before template loop
      
      * Removed the inject-root multimethod in favor of simple functions. Introducing the idea of removing metric and filter candidates that claim dimensions that don't exist.
      
      * Adding logic to pre-emptively remove unsatisfied metrics, filters, and dimensions.
      
      In `card-candidates` the check:
      
      ```
      (and
        (every? context-dimensions (map ffirst card-dimensions))
        (every? context-metrics card-metrics)
        (every? context-filters card-filters))
      ```
      
      is now performed to ensure that the card is satisfied before moving forward.
      
      * Renamed common-dimensions, common-metrics, and common-filters to satisfied-* for clarity. In card-candidates also changed context-* and card-* bindings to available-* and required-* for clarity.
      
      * Teasing out the generated dimensions, metrics, and filters from the big-ball-of-mud context. `make-cards` now takes the base context as well as the available computed dimensions, metrics, and filters as a separate argument. Additionally, tests were added for the `(comp #(filter-tables % tables) dashboard-templates/->entity)` branch of `some-fn` in `potential-card-dimension-bindings`. However, I am not confident that this is ever called outside of the tests. In order to be called, the entity type for the table has to be a named dimension in a card, which I can't find any examples of in the templates.
      
      * Pushing mega-context down to return value of apply-dashboard-template. Just need to take it apart where it is used downstream.
      
      * apply-dashboard-template now just returns generated values and does not build up the base context.
      
      * Adding tests for cases in which dimensions are defined in a native query on the dashcard template.
      
The only case where this happens is in resources/automagic_dashboards/table/example.yaml, which,
prior to this PR, had no test coverage.
      
      It does add some weird complexities to the system (including bringing in dimensions from a different
      approach vector) so perhaps we want to re-evaluate if we want this as a feature or not.
      
      ---------
      
Co-authored-by: dan sutton <dan@dpsutton.com>
  34. Sep 06, 2023
      Indexed Entity X-Rays (#33656) · 191b8165
      Mark Bastian authored
      
# Primary objective: From an indexed entity, locate the items referencing that entity and make a dashboard for it. This will make several dashboards, one for each linked entity, so the strategy is to combine them all into a single dashboard with tabs. If there are no linked entities, we create a simple dashboard indicating that nothing interesting is linked. If there is a single linked entity, we create a single dashboard (no tabs) but with the same linked-entity title.
      
      A few of the main nses that were affected and other bullet point changes:
      
      * metabase.api.automagic-dashboards - The vast majority of the work to create the unified dashboard is found here.
      * metabase.automagic-dashboards.core - Docs, cleanup, and destructuring for clarity
      * metabase.api.search - Adding fix to realize search results
      * metabase.automagic-dashboards.filters - Fix to build-fk-map so that parameters show up on model x-rays
      * metabase.automagic-dashboards.populate - Fix typo
      * metabase.api.search-test - Unit tests for search fix
* Brought the tab-saving code for transient x-ray dashboards over to save-transient-dashboard!
      
      ---------
      
      ## Overall summary
      
      The primary entry point to these changes can be found in `metabase.api.automagic-dashboards` at:
      
      ```clojure
      (api/defendpoint GET "/model_index/:model-index-id/primary_key/:pk-id" ...
      ```
      
      A simple reproduction of dashboard creation from indexed entities is shown here:
      
      ```clojure
      (let [model-index-id 54 ;; This and pk-id will be specific to some indexed entity in your system
            pk-id          1]
        (binding [api/*current-user-permissions-set* (delay #{"/"})
                  api/*current-user-id*              1]
          (let [model-index       (t2/select-one ModelIndex :id model-index-id)
                model             (t2/select-one Card (:model_id model-index))
                model-index-value (t2/select-one ModelIndexValue
                                                 :model_index_id model-index-id
                                                 :model_pk pk-id)
                linked            (#'api.magic/linked-entities
                                    {:model             model
                                     :model-index       model-index
                                     :model-index-value model-index-value})
                dashboard         (#'api.magic/create-linked-dashboard
                                    {:model             model
                                     :linked-tables     linked
                                     :model-index       model-index
                                     :model-index-value model-index-value})]
            dashboard)))
      ```
      
      ---------
      
## Fixed the query filter in `metabase.api.automagic-dashboards` so that `create-linked-dashboard` does not produce bad queries.
      
We're no longer making joins back to the model's underlying table.
Recognizing that we were joining the linked table's fk to the model's
underlying table's pk and then searching where pk = <pk-value>,
we can just filter where fk = <pk-value> and omit the join.
      
      So these tests just grab all of the linked tables and assert that one of
      those filters is found.
      
E.g., suppose a model based on products; the linked tables are orders and
reviews. Rather than a query like:
      
      ```sql
      select o.*
      from orders o
      left join products p
      on p.id = o.product_id
      where p.id = <pk-value>
      ```
      
      we can just do
      
      ```sql
      select *
      from orders
      where product_id = <pk-value>
      ```
      
Similarly for reviews. For each query in the dashboard we're
looking for one of the following:
      
      ```clojure
#{[:= $orders.product_id 1] [:= $reviews.product_id 1]}
      ```
      
      ---------
      
      ## Handle expression refs in indexed-models
      
      Custom columns look like the following:
      
      ```clojure
{:expressions {"full-name" [:concat <first-name> <last-name>]}}
      ```
      
To index values, we need a sequence of primary keys and the associated
text values. So we select them from a nested query on the model. But the
way the model refers to an expression and the way queries _on_ the model
refer to it are different. To the model, it's an expression. To queries
based on the model, it's just another field.
      
Models have field refs in their result_metadata like `[:expression
"full-name"]`. But when selecting from a nested query, they are just
string-based fields: `[:field "full-name" {:base-type :type/Text}]`.
      
      old style query we issued when fetching values:
      
      ```clojure
      {:database 122
       :type :query
       :query {:source-table "card__1715"
               :breakout [[:field 488 {:base-type :type/Integer}]
                          [:expression "full name"]] ;; <-- expression
               :limit 5001}}
      ```
      
      new style after this change:
      
      ```clojure
      {:database 122
       :type :query
       :query {:source-table "card__1715"
               :breakout [[:field 488 {:base-type :type/Integer}]
                          [:field "full name" {:base-type :type/Text}]]
               :limit 5001}}
      ```
      
      ---------
      
      ## Normalize field references
      
The schema was expecting valid mbql field refs (aka vectors and
keywords) but was getting a list and string (`[:field 23 nil]`
vs `("field" 23 nil)`). So normalize the field ref so we can handle what
comes in over the wire.
      
This nice little bit of normalization lived in models.interface and
composed two functions from mbql.normalize. A few places reused it, so it
was moved to the correct spot.
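
A toy version of what such a normalization does; the body is illustrative, not the real `normalize-field-ref`.

```clojure
;; ("field" 23 nil) comes in over the wire; [:field 23 nil] is what the schema expects.
(defn normalize-field-ref
  [ref]
  (into [(keyword (first ref))] (rest ref)))

(normalize-field-ref '("field" 23 nil))       ;; => [:field 23 nil]
(normalize-field-ref '("expression" "total")) ;; => [:expression "total"]
```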
      
      * Better error message in `fix-expression-refs`
      
Handles [:field ...] and [:expression ...] clauses. It seems unlikely that
aggregations will flow through here, as that's not a great way to label a
pk, but I'll add support for that in a follow-up.
      
      * Move `normalize-field-ref` below definition of `normalize-tokens`
      
`normalize-tokens` is `declare`d earlier, but we weren't using the var as
a var; in the def we dereference it, and that value is unbound since it
hadn't been defined yet. This caused lots of failures downstream, including
in cljs land:
      
      ```clojure
      user=> (declare x)
      user=> (def y x) ;; <- the use in a def gets its current value
      user=> (def x 3) ;; <- and it's not reactive and backfilling
      user=> y
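;; => an Unbound placeholder (prints as #object[clojure.lang.Var$Unbound ...]), not 3:
;;    the def captured x's value from before x was defined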
      ```
      
      * Don't capture `declare`d vars in a def
      
We need late binding: when the function is called, those vars will have a
binding. But if we use a `def`, it grabs their current value, which is an
unbound var.
      
      ---------
      
Co-authored-by: dan sutton <dan@dpsutton.com>
Co-authored-by: Emmad Usmani <emmadusmani@berkeley.edu>
  35. Sep 05, 2023
      Migrate `metabase.sync*` to Malli (#33682) · e944046f
      Cam Saul authored
      * Sync use Malli
      
      * Test fixes
      
      * One last test fix :wrench:
      
      * Update Kondo config
      
      * Test fixes
      
      * Revert accidental whitespace
      
      * Fix whitespace
      
      * Fix more mystery whitespace
      
      * Remove byte order marks
      
      * Test fixes :wrench:
  36. Sep 02, 2023
Migrate QP/drivers code to use MLv2 metadata directly; make another 100+ tests parallel and shave 12 seconds off test suite (#33429) · 7bacd2fa
Cam Saul authored
      
      * QP MLv2 Metadata Provider follow-ons
      
      * Update lots of stuff to use MLv2 metadata instead of legacy metadata
      
      * Fix lint warnings
      
      * B I G  cleanup
      
      * Everything is neat
      
      * Mention new test pattern in driver dev changelog
      
      * Appease Cljs by renaming a bunch of namespaces
      
      * Move more stuff over
      
      * Fix kondo errors
      
      * Fix even more things
      
      * Test fixes
      
      * Fix JS export
      
      * Test fixes :wrench:
      
      * Fix final MongoDB test
  37. Aug 24, 2023
QP: use MLv2 metadata provider. Eliminate 50% of app DB calls and improve performance by 10%+ (#33221) · a66db9e9
Cam Saul authored
      
      * QP use MLv2 metadata provider (34/2)
      
      * (22/2)
      
      * (21 / 0)
      
      * (20 / 4)
      
      * (7 / 2)
      
      * (2/0)
      
      * Cleanup; (3/0)
      
      * Last two test fixes
      
      * (36 / 2)
      
      * (8 / 1)
      
      * Reorder stuff
      
      * Test fixes :wrench:
      
      * Test fixes
      
      * Some test fixes
      
      * More test fixes :wrench:
      
      * Test fix :wrench:
      
      * MongoDB test fix
      
      * B I G  cached metadata provider performance improvements
      
      * Revert breaking change
      
      * Fix Kondo
      
      * Make sure application database metadata provider returns Database :features
      
      * Test fix :wrench:
      
      * Parallel tests for QP macroexpansion middleware
      
      * `with-current-user` is fine in parallel tests.
      
      * Add test util remap metadata providers and rework remap middleware to use mock MLv2 data
      
      * Address PR feedback
      
      * Improve unrelated flaky test
      
      * Some test improvements
      
      * Fix lint error
      
      * MLv2-ize `nest-query-test`
      
      * Fix typo in PR feedback changes
      
      * More parallelization and test fixes :wrench:
      
      * Convert lots more tests to mock metadata
      
      * Test fix? :wrench:
      
      * Fix Card update logic
      
      * Another Card update logic fix :wrench:
      
      * Another round of fixes :wrench:
      
      * Hopefully no more test fixes :wrench:
      
      * Evil test fixes :wrench:
      
      * Test fix 1
      
      * Test fix