This project is mirrored from https://github.com/metabase/metabase.
  1. Sep 29, 2022
  2. Sep 28, 2022
    • Advanced datetime extraction (#25277) · 5a80e561
      Ngoc Khuat authored
      * Implement advanced date/time/zone manipulation, part 1
      
      Incorporate new functions into MBQL and add tests:
       - get-year
       - get-quarter
       - get-month
       - get-day
       - get-day-of-week
       - get-hour
       - get-minute
       - get-second
      
      * Fix BigQuery implementations to call extract
      
      Mark as not supported in legacy driver
      
      * Add date extraction fns for Postgres
      
      * Disable in MongoDB (for now at least)
      
      Disable in BigQuery (legacy driver)
      
      Add implementations for presto-jdbc
      
      * Misc cleanup from Jeff's PR
      
      * Update Jeff's implementation of bigquery-cloud-sdk
      
      * Reorganized tests
      
      * Mongo
      
      * Oracle
      
      * Sqlserver
      
      * Sqlite
      
      * Add casting support for Presto
      
      * Remove Jeff's implementation of presto-jdbc because its parent is sql-jdbc
      
      * Update presto-jdbc tests to use the same catalog for all datasets
      
      * Add date extraction functions to the expression editor (#25382)
      
      * make sure the semantic type of aggregated columns is integer
      
      * no recursive call in annotate for date-extract func
      
      * get-unit -> temporal-extract(column, unit)
      
      * desugar nested datetime extraction too
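
      A hedged sketch of the resulting clause shape (the field ID and unit
      keyword are illustrative; the exact schema lives in the MBQL spec):

      ```clojure
      ;; get-year, get-month, etc. desugar to a single clause:
      ;; [:temporal-extract <column> <unit>]
      [:temporal-extract [:field 123 nil] :year]
      ```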
      5a80e561
  3. Aug 24, 2022
  4. Aug 22, 2022
  5. Aug 12, 2022
    • Enable Kondo for tests part 2: enable `:unused-binding` linter and fix warnings (#24748) · adf45182
      Cam Saul authored
      * Fix some small things
      
      * Add Kondo to deps.edn to be able to debug custom hooks from REPL
      
      * Fix macroexpansion hook for with-temp* without values
      
      * Test config (WIP)
      
      * More misc fixes
      
      * Disable :inline-def for tests
      
      * More misc fixes
      
      * Fix $ids and mbql-query kondo hooks.
      
      * Fix with-temporary-setting-values with namespaced symbols
      
      * More misc fixes
      
      * Fix the rest of the easy ones
      
      * Fix hook for mt/dataset
      
      * Horrible hack to work around https://github.com/clj-kondo/clj-kondo/issues/1773. Custom linter for mbql-query macro
      
      * Fix places calling mbql-query with a keyword table name
      
      * Fix the last few errors in test/
      
      * Fix errors in enterprise/test and shared/test
      
      * Fix driver test errors
      
      * Enable linters on CI
      
      * Enable unresolved-namespace linter for tests
      
      * Appease the namespace linter again
      
      * Test fixes
      
      * Enable unused-binding linter for test/ => 293 warnings
      
      * 259 warnings
      
      * 234 warnings
      
      * => 114 warnings
      
      * Fix the rest of the unused binding warnings in test/
      
      * Fix unused binding errors in enterprise/backend/test
      
      * Fix unused binding lint errors in driver tests
      
      * Test fix :wrench:
      
      * Assure Kondo that something is in fact used
      adf45182
    • Enable Kondo for tests (part 1) (#24736) · bc4acbd2
      Cam Saul authored
      * Fix some small things
      
      * Add Kondo to deps.edn to be able to debug custom hooks from REPL
      
      * Fix macroexpansion hook for with-temp* without values
      
      * Test config (WIP)
      
      * More misc fixes
      
      * Disable :inline-def for tests
      
      * More misc fixes
      
      * Fix $ids and mbql-query kondo hooks.
      
      * Fix with-temporary-setting-values with namespaced symbols
      
      * More misc fixes
      
      * Fix the rest of the easy ones
      
      * Fix hook for mt/dataset
      
      * Horrible hack to work around https://github.com/clj-kondo/clj-kondo/issues/1773. Custom linter for mbql-query macro
      
      * Fix places calling mbql-query with a keyword table name
      
      * Fix the last few errors in test/
      
      * Fix errors in enterprise/test and shared/test
      
      * Fix driver test errors
      
      * Enable linters on CI
      
      * Enable unresolved-namespace linter for tests
      
      * Appease the namespace linter again
      
      * Test fixes
      bc4acbd2
  6. Aug 11, 2022
    • Enable clj-kondo for driver namespaces (#24728) · 047a336b
      Cam Saul authored
      * Enable clj-kondo for driver namespaces
      
      * Fix CI command
      
      * Fix CI config (?)
      
      * Try hardcoding the driver directories in CI config
      
      * Fix some busted stuff
      
      * Enable the redundant fn wrapper linter
      
      * Appease the namespace linter
      047a336b
  7. Aug 10, 2022
  8. Aug 04, 2022
    • Set timezone when truncating dates (#24247) · 05b33c2e
      metamben authored
      Breaking date values into parts should happen in the time zone expected by the user, and the construction of the
      truncated date should happen in the same time zone. This is especially important when the difference between the
      expected time zone and UTC is not an integer number of hours and the truncation resolution is hour. (See #11149.)
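
      For example, a plain java.time illustration of the half-hour-offset
      case (assuming Asia/Kolkata at UTC+05:30; not code from this change):

      ```clojure
      ;; truncating in the user's zone: 10:45+05:30 -> 10:00+05:30
      (.truncatedTo (java.time.ZonedDateTime/of 2022 8 4 10 45 0 0
                                                (java.time.ZoneId/of "Asia/Kolkata"))
                    java.time.temporal.ChronoUnit/HOURS)
      ;; => 2022-08-04T10:00+05:30[Asia/Kolkata]
      ;; truncating the same instant in UTC (05:15Z -> 05:00Z) and converting
      ;; back yields 10:30+05:30 -- a different, surprising result
      ```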
      05b33c2e
    • Fix SSH tunnel leaks when testing connections (#24446) · cc632e81
      Cam Saul authored
      * Fix SSH tunnel leaks when testing connections
      
      * Appease kondo =(
      
      * Add some more NOTES
      cc632e81
  9. Aug 03, 2022
  10. Aug 02, 2022
    • bump snowflake (#24480) · b5ea59aa
      dpsutton authored
      * bump snowflake
      
      * Handle Types/TIMESTAMP_WITH_TIMEZONE for snowflake
      
      Snowflake used to return columns of this type as `Types/TIMESTAMP` =
      93.
      
      The driver is registered with the legacy stuff
      
      ```clojure
      (driver/register! :snowflake,
                        :parent #{:sql-jdbc
                                  ::sql-jdbc.legacy/use-legacy-classes-for-read-and-set})
      ```
      
      causing it to hit the following codepath:
      
      ```clojure
      (defmethod sql-jdbc.execute/read-column-thunk [::use-legacy-classes-for-read-and-set Types/TIMESTAMP]
        [_ ^ResultSet rs _ ^Integer i]
        (fn []
          (when-let [s (.getString rs i)]
            (let [t (u.date/parse s)]
              (log/tracef "(.getString rs i) [TIMESTAMP] -> %s -> %s" (pr-str s) (pr-str t))
              t))))
      ```
      
      But Snowflake now reports the column as `Types/TIMESTAMP_WITH_TIMEZONE`
      = 2014 (changed in https://github.com/snowflakedb/snowflake-jdbc/pull/934/files),
      so we no longer hit this string-based path for the
      timestamp-with-timezone case. Instead it hits
      
      ```clojure
      (defn- get-object-of-class-thunk [^ResultSet rs, ^long i, ^Class klass]
        ^{:name (format "(.getObject rs %d %s)" i (.getCanonicalName klass))}
        (fn []
          (.getObject rs i klass)))
      
      ,,,
      
      (defmethod read-column-thunk [:sql-jdbc Types/TIMESTAMP_WITH_TIMEZONE]
        [_ rs _ i]
        (get-object-of-class-thunk rs i java.time.OffsetDateTime))
      ```
      
      And `(.getObject ...)` blows up with an unsupported exception.
      
      ```java
        // @Override
        public <T> T getObject(int columnIndex, Class<T> type) throws SQLException {
          logger.debug("public <T> T getObject(int columnIndex,Class<T> type)", false);
      
          throw new SnowflakeLoggedFeatureNotSupportedException(session);
        }
      ```
      
      There seem to be some `getTimestamp` methods on
      `SnowflakeBaseResultSet` that we could call, but for now I'm just
      grabbing the string and we'll parse it on our side.
      
      One style note: it is not quite clear to me whether this should follow
      the pattern established in the `snowflake.clj` driver of setting
      `defmethod sql-jdbc.execute/read-column-thunk [:snowflake
      Types/TIMESTAMP_WITH_TIMEZONE]`, or the legacy pattern of `defmethod
      sql-jdbc.execute/read-column-thunk
      [::use-legacy-classes-for-read-and-set Types/TIMESTAMP]`. It seems like
      either would be fine; it just depends on whether other legacy drivers
      might want this fix, which seems possible.
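
      A minimal sketch of the grab-the-string approach under the first
      pattern, reusing the `read-column-thunk` and `u.date/parse` pieces
      shown above (illustrative, not the final code):

      ```clojure
      (defmethod sql-jdbc.execute/read-column-thunk [:snowflake Types/TIMESTAMP_WITH_TIMEZONE]
        [_driver ^ResultSet rs _rsmeta ^Integer i]
        ;; .getObject throws SnowflakeLoggedFeatureNotSupportedException, so
        ;; grab the string and parse it on our side instead
        (fn []
          (when-let [s (.getString rs i)]
            (u.date/parse s))))
      ```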
      b5ea59aa
  11. Jul 23, 2022
  12. Jul 20, 2022
  13. Jul 19, 2022
    • Implement `:bulk/update` Action (backend) (#24036) · bd913c87
      Cam Saul authored
      * `:bulk/update`
      
      * Some refactoring. Add error handling and tests for `:bulk/update`
      
      * Remove unused namespace
      
      * PR feedback and some bug fixes
      
      * Make sure bulk/delete fails if you have all PK columns and also non-PK columns
      
      * Implement caching mechanism for Actions
      
      * Fix more `bulk/delete` tests for Postgres
      
      * Disable kondo error for `u/key-by` being deprecated
      
      * Fix duplicate test setup code in `bulk/delete` tests
      
      * Fix CircleCI job skipping magic: `$CIRCLE_STAGE` has been renamed to `$CIRCLE_JOB`
      
      * Fix Presto JDBC failures from `NOT NULL` column
      
      * Fix GA Card Save expected status code
      
      * Fix error message differences between Postgres versions
      
      * Update Snowflake DDL statements tests
      
      * Fix redundant `let` expressions
      
      * handle different error messages for different PG versions
      bd913c87
    • Fix CircleCI job skipping magic: `$CIRCLE_STAGE` has been renamed to `$CIRCLE_JOB` (#24128) · 95cd7f25
      Cam Saul authored
      * Fix CircleCI job skipping magic: `$CIRCLE_STAGE` has been renamed to `$CIRCLE_JOB`
      
      * Bust cache if `$CIRCLE_JOB` is unset
      
      * Print message if `$CIRCLE_JOB` is unset and we're busting the cache
      
      * Fix GoogleAnalytics status code
      95cd7f25
  14. Jul 15, 2022
  15. Jun 30, 2022
  16. Jun 24, 2022
  17. Jun 20, 2022
    • Fix compilation of temporal arithmetic in between filters (#23292) · 001055df
      metamben authored
      Fix compilation of temporal arithmetic for BigQuery and Mongo 5+
      
      * Mongo 4 doesn't support $dateAdd so the generated filters result in an exception.
      * Support adding a field to an interval too (intervals were not allowed as the first operand of an addition)
      * Support temporal arithmetic with more than two operands for Mongo (both shapes are sketched below)
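
      Hedged sketches of the newly supported MBQL shapes (field ID is
      illustrative):

      ```clojure
      [:+ [:interval 1 :day] [:field 123 nil]]                     ; interval as the first operand
      [:+ [:field 123 nil] [:interval 1 :day] [:interval 2 :hour]] ; more than two operands
      ```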
      001055df
  18. Jun 14, 2022
    • Add database_required to field metadata (#23213) · d240a533
      Case Nelson authored
      * Add database_required to field metadata
      
      This new column tracks whether the database believes the column is
      required during writeback. It may be that the user knows better, but
      that is out of scope and would need another column so that sync never
      overwrites what a user sets.

      Since writeback isn't planned for non-transactional DBs yet, we do not
      need to calculate requiredness in other drivers.

      This could become more sophisticated in the future (e.g., looking at
      triggers that modify the column), but simply looking for not-null,
      non-default columns gets us a big win in the UI.
      
      * Fixing tests
      
      * Fixing more tests
      
      * Default updates to false
      
      * Add database-required to field sync metadata test
      
      * Update driver tests
      
      * Fix tests
      
      * Changes from testing on different dbs
      
      https://docs.oracle.com/javase/7/docs/api/java/sql/DatabaseMetaData.html#getColumns(java.lang.String,%20java.lang.String,%20java.lang.String,%20java.lang.String)
      Based on these descriptions of `getColumns` and testing across DBs, I
      made the following changes:

      `COLUMN_DEF` may or may not show autoincrement defaults
      `COLUMN_DEF` may be `nil` or `"NULL"`, and the docs say it may be `"null"`
      `NULLABLE` is an int value, `0` indicating not null

      `NULLABLE` and `AUTOINCREMENT` may be unknown (`3`, `""`); in those
      cases the field will not be marked as required.
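
      A hedged sketch of the resulting check (the function name and map keys
      are illustrative; the constants come from `java.sql.DatabaseMetaData`
      and the `IS_AUTOINCREMENT` column):

      ```clojure
      (defn- database-required?
        "Required = NOT NULL, no default, and not autoincrement. Unknown
        NULLABLE (3) or autoincrement (\"\") means NOT required."
        [{:keys [column-def nullable autoincrement]}]
        (and (= nullable java.sql.DatabaseMetaData/columnNoNulls) ; 0 = not null
             (nil? column-def)                                    ; no default value
             (= autoincrement "NO")))                             ; "YES" or "" fails the check
      ```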
      
      * Updating tests from changes
      
      * Fix driver tests
      
      * Cypress tests had fk field ids hardcoded
      d240a533
  19. Jun 09, 2022
    • Fix mysql humanize error (#23263) · 051534fe
      dpsutton authored
      * Correctly handle fall-through case.
      
      Condp is exhaustive and throws if nothing matches
      
      ```clojure
      mysql=> (condp = 2
                1 :nope
                3 :nope)
      Execution error (IllegalArgumentException) at metabase.driver.mysql/eval169387 (REPL:870).
      No matching clause: 2
      mysql=>
      ```
      
      and can take a single default clause at the end with no predicate
      
      ```clojure
      mysql=> (condp = 2
                1 :nope
                :default-catch-all)
      :default-catch-all
      ```
      
      This was attempted with a regex `#".*"` but this regex does not match
      everything!
      
      ```clojure
      mysql=> (driver/humanize-connection-error-message :mysql
                                                        "hello\nthere")
      Execution error (IllegalArgumentException) at metabase.driver.mysql/eval167431$fn (mysql.clj:124).
      No matching clause: hello
      there
      mysql=> (re-matches #".*" "hi\nthere")
      nil
      ```
      
      So just use the message as the default to be returned, which was the
      original intention.
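
      A minimal sketch of that shape (the pattern shown is illustrative, not
      the driver's actual regexes):

      ```clojure
      (defmethod driver/humanize-connection-error-message :mysql
        [_ message]
        (condp re-matches message
          #"(?s)Communications link failure.*"
          :cannot-connect-check-host-and-port
          ;; ... other patterns elided ...
          ;; default clause (no predicate): return the message itself instead
          ;; of relying on a #".*" "catch-all" that misses newlines
          message))
      ```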
      
      * Fix other databases as well
      051534fe
  20. Jun 08, 2022
    • Support SSL with client auth for Mongo (#22977) · cef4c19b
      metamben authored
      We already had support for server authentication based on custom
      certificates. This change adds support for authenticating the client
      with a custom client key and certificate.
      cef4c19b
  21. Jun 07, 2022
  22. Jun 06, 2022
    • Bump google oauth version (#23165) · dc343cdc
      dpsutton authored
      CVE info:
      Package: com.google.oauth-client:google-oauth-client
      Installed Version: 1.31.5
      Vulnerability CVE-2021-22573
      Severity: HIGH
      Fixed Version: 1.33.3
      
      ```
        . metabase/bigquery-cloud-sdk /Users/dan/projects/work/metabase/modules/drivers/bigquery-cloud-sdk
          . com.google.cloud/google-cloud-bigquery 1.135.4
            . [truncated]
            . com.google.oauth-client/google-oauth-client 1.31.5
      
        . metabase/googleanalytics /Users/dan/projects/work/metabase/modules/drivers/googleanalytics
          . com.google.apis/google-api-services-analytics v3-rev20190807-1.32.1
            . com.google.api-client/google-api-client 1.32.1
              . com.google.oauth-client/google-oauth-client 1.31.5
      ```
      
      I looked into bumping
      com.google.apis/google-api-services-analytics-v3-rev20190807-1.32.1
      but as far as I can tell from
      https://search.maven.org/artifact/com.google.apis/google-api-services-analytics
      this is the most recent version so we have to just target the transitive
      dep.
      
      For bigquery, it seems we are pretty far behind. 1.135.4 was released in
      July 2021, the current version is 2.13.1 released in
      June. https://mvnrepository.com/artifact/com.google.cloud/google-cloud-bigquery
      I'm hesitant to bump this for a CVE but we need to prioritize this
      upgrade.
      
      After this PR:
      
      ```
      clj -Stree -A:drivers
      
        . metabase/bigquery-cloud-sdk /Users/dan/projects/work/metabase/modules/drivers/bigquery-cloud-sdk
          . com.google.cloud/google-cloud-bigquery 1.135.4
            . [truncated]
            X com.google.oauth-client/google-oauth-client 1.31.5 :older-version
      
          . com.google.oauth-client/google-oauth-client 1.33.3
        . metabase/googleanalytics /Users/dan/projects/work/metabase/modules/drivers/googleanalytics
          . com.google.apis/google-api-services-analytics v3-rev20190807-1.32.1
            . com.google.api-client/google-api-client 1.32.1
              X com.google.oauth-client/google-oauth-client 1.31.5 :older-version
      ```
      
      With the `X` meaning not included and 1.33.3 being top level included so
      using that version.
      dc343cdc
  23. May 31, 2022
    • Bump transitive com.google.code.gson/gson (#23069) · d7b9ce1c
      dpsutton authored
      An alert from trivy:
      
      ```
      Package: com.google.code.gson:gson
      Installed Version: 2.8.7
      Vulnerability CVE-2022-25647
      Severity: HIGH
      Fixed Version: 2.8.9
      Link: CVE-2022-25647
      Trivy
      ```
      
      Running `clj -Stree` alone will not show this dep because it is in two
      drivers. Instead running
      
      ```
      clj -Stree -A:ee:drivers
      ```
      
      will find it.
      
      ```
      . metabase/bigquery-cloud-sdk /Users/dan/projects/work/metabase/modules/drivers/bigquery-cloud-sdk
          . com.google.cloud/google-cloud-bigquery 1.135.4
            . com.google.code.gson/gson 2.8.7
      ```
      
      and
      
      ```
        . metabase/googleanalytics /Users/dan/projects/work/metabase/modules/drivers/googleanalytics
          . com.google.apis/google-api-services-analytics v3-rev20190807-1.32.1
            . com.google.api-client/google-api-client 1.32.1
              . com.google.http-client/google-http-client-gson 1.39.2
                X com.google.code.gson/gson 2.8.6 :older-version
      ```
      
      This shows that google analytics depends on 2.8.6, which is not
      actually used, and bigquery-cloud-sdk depends on 2.8.7, which is the
      version we end up with. (The `X` means excluded from the jar, with the
      reason being `:older-version`.)
      
      More info:
      
      https://clojure.org/reference/dep_expansion#_tree_printing
      
      ```
      Trees are built from the trace log and include all considered nodes. Included nodes are prefixed with `.`. Excluded nodes are prefixed with `X`. The end of the line will contain the reason code (some codes are suppressed). The current set of reason codes (subject to change) are:
      
          :new-top-dep - included as top dep (suppressed)
      
          :new-dep - included as new dep (suppressed)
      
          :same-version - excluded, same as currently selected dep (suppressed)
      
          :newer-version - included, newer version than previously selected
      
          :use-top - excluded, same as top lib but not at top
      
          :older-version - excluded, older version than previously selected
      
          :excluded - excluded, node in parent path excluded this lib
      
          :parent-omitted - excluded, parent node deselected
      
          :superseded - excluded, this version was deselected
      
      ```
      
      THE FIX:

      Just put a top-level dependency on the version we care about; no need
      to exclude the old version. Technically we only need it in one project,
      since our build would always use the specified version, but in case
      anyone builds with just one or the other, it is included in both for
      completeness, with a comment indicating the other location.
      
      ```clojure
      com.google.code.gson/gson {:mvn/version "2.8.9"}
      ```
      
      PROOF OF FIX:
      
      Run `clj -Stree -A:ee:drivers` and look for gson
      
      ```
        . metabase/bigquery-cloud-sdk /Users/dan/projects/work/metabase/modules/drivers/bigquery-cloud-sdk
          . com.google.cloud/google-cloud-bigquery 1.135.4
            X com.google.code.gson/gson 2.8.7 :older-version
      ```
      
      ```
      . metabase/googleanalytics /Users/dan/projects/work/metabase/modules/drivers/googleanalytics
          . com.google.apis/google-api-services-analytics v3-rev20190807-1.32.1
            . com.google.api-client/google-api-client 1.32.1
              . com.google.http-client/google-http-client-gson 1.39.2
                X com.google.code.gson/gson 2.8.6 :older-version
          . com.google.code.gson/gson 2.8.9
      ```
      
      - 2.8.7 in bigquery-cloud-sdk now has an `X` and `:older-version`
      - 2.8.6 in google analytics still has `X` and `:older-version`
      - metabase/googleanalytics now has a top level (and included `.`) gson on 2.8.9
      d7b9ce1c
  24. May 25, 2022
    • Indicate field for database error (#22804) · 3ecc3833
      metamben authored
      
      * Indicate field responsible for database error
      
      * Implement new interface
      
      * Make error showing logic a little easier to understand
      
      * Show individual field error in database form
      
      * Add backend tests for connection errors when adding a database
      
      * Add new form error component
      
      * Make database form error message pretty
      
      * Make main database form error message field bold
      
      * Change create account database form according to feedback
      
      * Fix failed E2E tests from UI changes.
      
      * Make it easier to tree-shake "lodash"
      
      * Change according to PR review
      
      * Cleanup + remove FormError (to be included in a separate PR)
      
      Co-authored-by: Mahatthana Nomsawadi <mahatthana.n@gmail.com>
      3ecc3833
  25. May 23, 2022
  26. May 17, 2022
    • bumps outdated deps versions to be current + drop support for java 8 (#22567) · c1b73ed6
      Bryan Maass authored
      * bumps outdated deps versions to be current
      
      * un-upgrade h2 and jetty
      
      * un-upgrade joda-time and kixi/stats
      
      * drop Java 8 support in circle CI config
      
      - things that used to rely on be-tests-java-8-ee now rely on be-tests-java-11-ee
      
      * remove java 8 from github health check matrix
      
      * revert toucan to 1.17.0
      
      * revert mariadb java client to 2.7.5
      
      * Back to 18, and handle new behavior
      
      Toucan used to look in *.models.<model-name> for models and, failing
      that, just give up. I added a feature so that Toucan looks in a model
      registry to create models rather than relying on the naming convention:
      https://github.com/metabase/toucan/commit/762ad69defc1477423fa9423e9320ed318f7cfe7

      But now we're getting errors in these tests about maps vs. models.
      
      ```clojure
      revision_test.clj:154
      Check that revisions+details pulls in user info and adds description
      expected: [#metabase.models.revision.RevisionInstance{:is_reversion false,
                                                            :is_creation false,
                                                            :message nil,
                                                            :user
                                                            {:id 1,
                                                             :common_name "Rasta Toucan",
                                                             :first_name "Rasta",
                                                             :last_name "Toucan"},
                                                            :diff
                                                            {:o1 nil, :o2 {:name "Tips Created by Day", :serialized true}},
                                                            :description nil}]
        actual: (#metabase.models.revision.RevisionInstance{:description nil,
                                                            :is_creation false,
                                                            :is_reversion false,
                                                            :user
                                                            {:id 1,
                                                             :first_name "Rasta",
                                                             :last_name "Toucan",
                                                             :common_name "Rasta Toucan"},
                                                            :message nil,
                                                            :diff
                                                            {:o1 nil,
                                                             :o2
                                                             #metabase.models.revision_test.FakedCardInstance{:name
                                                                                                              "Tips Created by Day",
                                                                                                              :serialized
                                                                                                              true}}})
      ```
      
      The only difference here is that `:o2` is a
      `metabase.models.revision_test.FakedCardInstance` but still has the same
      keys, `:name` and `:serialized`. So all is well; we're now just able to
      make the model.
      
      So a few different fixes: some use `partial=`, which doesn't care
      about the record/map distinction; some just make the model; and some
      turn them into maps for revision strings (which more closely mimics
      what the real revision code does):
      
      ```clojure
      (defn default-diff-map
        "Default implementation of `diff-map` which simply uses clojures `data/diff` function and sets the keys `:before` and `:after`."
        [_ o1 o2]
        (when o1
          (let [[before after] (data/diff o1 o2)]
            {:before before
             :after  after})))
      
      (defn default-diff-str
        "Default implementation of `diff-str` which simply uses clojures `data/diff` function and passes that on to `diff-string`."
        [entity o1 o2]
        (when-let [[before after] (data/diff o1 o2)]
          (diff-string (:name entity) before after)))
      ```
      
      So all in all this change impacts nothing in the app itself, because
      those models follow convention and are correct in
      `metabase.models.<model-name>` and are thus "modelified":
      
      ```clojure
      revision-test=> (revision/revisions Card 1)
      [#metabase.models.revision.RevisionInstance{:is_creation true,
                                                  :model_id 1,
                                                  :id 1,
                                                  :is_reversion false,
                                                  :user_id 2,
                                                  :timestamp #object[java.time.OffsetDateTime
                                                                     "0x77e037f"
                                                                     "2021-10-28T15:10:19.828539Z"],
                                                  :object #metabase.models.card.CardInstance
                                                  {:description nil,
                                                   :archived false,
                                                   :collection_position nil,
                                                   :table_id 5,
                                                   :database_id 2,
                                                   :enable_embedding false,
                                                   :collection_id nil,
                                                   :query_type :query,
                                                   :name "ECVYUHSWQJYMSOCIFHQC",
                                                   :creator_id 2,
                                                   :made_public_by_id nil,
                                                   :embedding_params nil,
                                                   :cache_ttl 1234,
                                                   :dataset_query {:database 2,
                                                                   :type :query,
                                                                   :query {:source-table 5,
                                                                           :aggregation [[:count]]}},
                                                   :id 1,
                                                   :display :scalar,
                                                   :visualization_settings {:global {:title nil}},
                                                   :dataset false,
                                                   :public_uuid nil},
                                                  :message nil,
                                                  :model "Card"}]
      ```
      
      so the model/no-model difference is just an arbitrary distinction in
      the test. All of them in the actual app are turned into models:
      
      ```clojure
      (defn- do-post-select-for-object
        "Call the appropriate `post-select` methods (including the type functions) on the `:object` this Revision recorded.
        This is important for things like Card revisions, where the `:dataset_query` property needs to be normalized when
        coming out of the DB."
        [{:keys [model], :as revision}]
        ;; in some cases (such as tests) we have 'fake' models that cannot be resolved normally; don't fail entirely in
        ;; those cases
        (let [model (u/ignore-exceptions (db/resolve-model (symbol model)))]
          (cond-> revision
          ;; this line would not find a model previously for FakedCard and
          ;; just return the map. But now the registry in toucan _finds_ the
          ;; model definition and returns the model'd map
            model (update :object (partial models/do-post-select model)))))
      
      (u/strict-extend (class Revision)
        models/IModel
        (merge models/IModelDefaults
               {:types       (constantly {:object :json})
                :pre-insert  pre-insert
                :pre-update  (fn [& _] (throw (Exception. (tru "You cannot update a Revision!"))))
                :post-select do-post-select-for-object}))
      ```
      
      * try using mssql-jdbc 10.2.1.jre11
      
      - Important that we get off the jre8 version
      
      * various fixes that needn't be reverted
      
      * Revert "various fixes that needn't be reverted"
      
      This reverts commit 2a820db0743d0062eff63366ebe7bc78b852e81f.
      
      * go back to using circle ci's java 11 docker image
      
      * java-16 (?) -> java-17
      
      * Revert "go back to using circle ci's java 11 docker image"
      
      This reverts commit b9b14c535a689f701d7e2541081164288c988c4e.
      
      Co-authored-by: dan sutton <dan@dpsutton.com>
      c1b73ed6
  27. May 06, 2022
  28. May 02, 2022
    • Persisted models schema (#21109) · c504a12e
      dpsutton authored
      * dir locals for api/let-404
      
      * Driver supports persisted model
      
      * PersistedInfo model
      
      far easier to develop this model with the following sql:
      
      ```sql
      create table persisted_info (
         id serial primary key not null
        ,db_id int references metabase_database(id) not null
        ,card_id int references report_card(id) not null
        ,question_slug text not null
        ,query_hash text not null
        ,table_name text not null
        ,active bool not null
        ,state text not null
        ,UNIQUE (db_id, card_id)
      )
      
      ```
      and I'll make the migration later. Way easier to just drop the table,
      `\i persist.sql`, and keep developing without worrying about the
      migration having changed so it can't roll back, SHAs, etc.
      
      * Persisting api (not making/deleting tables yet)
      
      http POST "localhost:3000/api/card/4075/persist" Cookie:$COOKIE -pb
      http DELETE "localhost:3000/api/card/4075/persist" Cookie:$COOKIE -pb
      
      useful from commandline (this is httpie)
      
      * Pull format-name into ddl.i
      
      * Postgres ddl
      
      * Hook up endpoints
      
      * move schema-name into interface
      
      * better jdbc connection management
      
      * Hotswap persisted tables into qp
      
      * clj-kondo fixes
      
      * docstrings
      
      * bad alias in test infra
      
      * goodbye testing format-name function
      
      left over. everything uses ddl.i/format-name and this rump was left
      
      * keep columns in persisted info
      
      columns that are in the persisted query. I thought about a tuple of
      [col-name type] instead of just the col-name, but I didn't include the
      type because I want to ensure that we compute the db-type in ONLY ONE
      WAY ever, and I wasn't ready to commit to that yet. I'm not sure this
      is necessary in the future, so it remains out for now.
      
      Context: we hot-swap the persisted table in for the original query,
      matching on the query hash remaining the same. It continues to use the
      metadata from the original query and just does `select cols from table`.
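
      Roughly, the swap looks like this (a hedged sketch; the cache table and
      column names are copied from examples later in this log):

      ```clojure
      ;; original model query, whose hash was recorded at persist time:
      {:type :query, :query {:source-table 5, :aggregation [[:count]]}}

      ;; what gets swapped in while the stored hash still matches -- a plain
      ;; select against the cache table, reusing the original query's metadata:
      {:type   :native
       :native {:query "SELECT catid, catgroup, catname, catdesc
                        FROM metabase_cache_1e483_229.model_4088_redshift_c"}}
      ```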
      
      * Add migration for persisted_info table
      
      Also removes the db_id; I don't know why I was thinking that was
      necessary. This also means we don't need another unique constraint on
      (db_id, card_id), since we can just mark card_id as unique.
      
      * fix ns in a sad manner :(
      
      far better to just have no alias to indicate it is required for side
      effects.
      
      * Don't hardcode a card-id :(:(:( my B
      
      * copy the PersistedInfo
      
      * ns cleanup, wrong alias, reflection warning
      
      * Check that state of persisted_info is persisted
      
      * api to enable persistence on a db
      
      I'm not wild about POST /api/database/:id/persist and POST
      /api/database/:id/unpersist, but carrying on; left a note about it.
      
      So now you can enable persistence on a db, enable persistence on a model
      by posting to api/card/:id/persist and everything works.
      
      What does not work yet is the unpersisting or re-persisting of models
      when using the db toggle.
      
      * Add refresh_begin and refresh_end to persisted_info
      
      This information helps us with two things:
      - when we need to chunk refreshing models, it lets us order by
      staleness so we can refresh a few models and pick up the rest later
      - if we desire, we can look at the previous elapsed time of refreshes
      and try to gauge the amount of work ahead. We can of course track our
      progress as we go, but there's no way to know whether the next refresh
      might take an hour; this gives us a bit of look-ahead.
      
      * Refresh tables every 8 hours ("0 0 0/8 * * ? *")
      
      Tables are refreshed every 8 hours. There is one single job doing this
      named "metabase.task.PersistenceRefresh.job" but it has 0 triggers by
      default. Each database with persisted models will add a trigger to this
      to refresh those models every 8 hours.
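
      A hedged sketch of such a per-database trigger (assuming Quartzite,
      which Metabase's task code uses; the identity strings are illustrative):

      ```clojure
      (require '[clojurewerkz.quartzite.jobs :as jobs]
               '[clojurewerkz.quartzite.schedule.cron :as cron]
               '[clojurewerkz.quartzite.triggers :as triggers])

      (defn- database-refresh-trigger
        "One trigger per database, all pointing at the single refresh job."
        [database-id]
        (triggers/build
         (triggers/with-identity
           (triggers/key (str "metabase.task.PersistenceRefresh.trigger." database-id)))
         (triggers/for-job (jobs/key "metabase.task.PersistenceRefresh.job"))
         (triggers/with-schedule (cron/schedule (cron/cron-schedule "0 0 0/8 * * ? *")))))
      ```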
      
      When you unpersist a model, it will immediately remove the table and
      then delete the persisted_info record.
      
      When you mark a database as persist false, it will immediately mark all
      persisted_info rows as inactive and deleteable, and unschedule its
      trigger. A background thread will then start removing the tables.
      
      * Schedule refreshing on startup, watching for already scheduled
      
      does not allow for schedule changes but that's a future endeavor
      
      * appease our linter overlords
      
      * Dynamic var to inhibit persistence when refreshing
      
      Also, it checked the state against "active" instead of "persisted",
      which is really freaky. How has this worked in the past if that's the case?
      
      * api docstrings on card persist
      
      * docstring
      
      * Don't sync the persisted schemas
      
      * Fix bad sql when no deleteable rows
      
      Getting an error with bad SQL when there were no ids.
      
      * TaskHistory for refreshing
      
      * Add created_at to persist_info table
      
      helpful if this ever ends up in the audit section
      
      * works on redshift
      
      hooked up the hierarchy and redshift is close enough that it just works
      
      * Remove persist_info record after deleting "deleteable"
      
      * Better way to check that something exists
      
      * POST /api/<card-id>/refresh
      
      api to refresh a model's persisted record
      
      * return a 204 from refreshing
      
      * Add buttons to persist/unpersist a database and a model for PoC (#21344)
      
      * Redshift and postgres report true for persist-models
      
      there are separate notions of whether persistence is possible vs.
      enabled. Seems like we're just gonna check details for enabled and
      rely on the driver multimethod for whether it is possible.
      
      * feature for enabled, hydrate card with persisted
      
      two features: :persist-models for which dbs support it, and
      :persist-models-enabled for when that option is enabled.
      
      POST to api/<card-id>/unpersist
      
      hydrate persisted on cards so FE can display persist/unpersist for
      models
      
      * adjust migration number
      
      * remove deferred-tru :shrug:
      
      
      
      * conditionally hydrate persisted on models only
      
      * Look in right spot for persist-models-enabled
      
      * Move persist enabled into options not details
      
      changing details recomposes the pool, which is especially bad now that
      we have refresh tasks going on reusing the same connection
      
      * outdated comment
      
      * Clean up source queries from persisted models
      
      their metadata might have had [:field 19 nil] field_refs, and we should
      substitute just [:field "the-name" {:base-type :type/Whatever-type}]
      since it will be a select from a native query.
      
      Otherwise you get the following:
      
      ```
      2022-03-31 15:52:11,579 INFO api.dataset :: Source query for this query is Card 4,088
      2022-03-31 15:52:11,595 WARN middleware.fix-bad-references :: Bad :field clause [:field 4070 nil] for field "category.catid" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4068 nil] for field "category.catgroup" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4071 nil] for field "category.catname" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,596 WARN middleware.fix-bad-references :: Bad :field clause [:field 4069 nil] for field "category.catdesc" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4070 nil] for field "category.catid" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4068 nil] for field "category.catgroup" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4071 nil] for field "category.catname" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,611 WARN middleware.fix-bad-references :: Bad :field clause [:field 4069 nil] for field "category.catdesc" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,622 WARN middleware.fix-bad-references :: Bad :field clause [:field 4070 nil] for field "category.catid" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,622 WARN middleware.fix-bad-references :: Bad :field clause [:field 4068 nil] for field "category.catgroup" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,622 WARN middleware.fix-bad-references :: Bad :field clause [:field 4071 nil] for field "category.catname" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      2022-03-31 15:52:11,623 WARN middleware.fix-bad-references :: Bad :field clause [:field 4069 nil] for field "category.catdesc" at [:fields]: clause should have a :join-alias. Unable to infer an appropriate join. Query may not work as expected.
      ```
      I think it's complaining that the table is not joined in the query
      and giving up.
      
      While doing this i see we are hitting the database a lot:
      
      ```
      2022-03-31 22:52:18,838 INFO api.dataset :: Source query for this query is Card 4,111
      2022-03-31 22:52:18,887 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
      2022-03-31 22:52:18,918 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
      2022-03-31 22:52:18,930 INFO middleware.fetch-source-query :: Substituting cached query for card 4,111 from metabase_cache_1e483_229.model_4111_redshift_c
      ```
      
      I tried to track down why we are doing this so much but couldn't get
      there.
      
      Annoyingly, I think I need to ensure that we are using the query store :(
      
      * Handle native queries
      
      didn't nest the vector in the `or` clause correctly. that was truthy
      only when the mbql-query local was truthy. Can't put the vector `[false
      mbql-query]` there and rely on that behavior
      
      * handle datetimetz in persisting
      
      * Errors saved into persisted_info
      
      * Reorder migrations to put v43.00-047 before 048
      
      * correct arity mismatch in tests
      
      * comment in refresh task
      
      * GET localhost:3000/api/persist
      
      Returns persisting information:
      - most information from the `persist_info` table. Excludes a few
      columns (query_hash, question_slug, created_at)
      - adds database name and card name
      - adds next fire time from quartz scheduling
      
      ```shell
      ❯ http GET "localhost:3000/api/persist" Cookie:$COOKIE -pb
      [
          {
              "active": false,
              "card_name": "hooking reviews to events",
              "columns": [
                  "issue__number",
                  "actor__login",
                  "user__login",
                  "submitted_at",
                  "state"
              ],
              "database_id": 19,
              "database_name": "pg-testing",
              "error": "No method in multimethod 'field-base-type->sql-type' for dispatch value: [:postgres :type/DateTimeWithLocalTZ]",
              "id": 4,
              "next-fire-time": "2022-04-06T08:00:00.000Z",
              "refresh_begin": "2022-04-05T20:16:54.654283Z",
              "refresh_end": "2022-04-05T20:16:54.687377Z",
              "schema_name": "metabase_cache_1e483_19",
              "state": "error",
              "table_name": "model_4077_hooking_re"
          },
          {
              "active": true,
              "card_name": "redshift Categories",
              "columns": [
                  "catid",
                  "catgroup",
                  "catname",
                  "catdesc"
              ],
              "database_id": 229,
              "database_name": "redshift",
              "error": null,
              "id": 3,
              "next-fire-time": "2022-04-06T08:00:00.000Z",
              "refresh_begin": "2022-04-06T00:00:01.242505Z",
              "refresh_end": "2022-04-06T00:00:01.825512Z",
              "schema_name": "metabase_cache_1e483_229",
              "state": "persisted",
              "table_name": "model_4088_redshift_c"
          }
      ]
      
      ```
      
      * include card_id in /api/persist
      
      * drop table if exists
      
      * Handle rescheduling refresh intervals
      
      There is a single global value for the refresh interval. The API
      requires it to be 1<=value<=23. There is no validation if someone
      changes the value in the db or with an env variable. Setting this to a
      nonsensical value could cause enormous load on the db so they shouldn't
      do that.
      
      On startup, unschedule all tasks and then reschedule them to make sure
      that they have the latest value.
      
      One thing to note: there is a single global value, but I'm making a
      task for each database. Per-database values seem like an obvious
      future enhancement, and I don't want to deal with migrations. Figure
      this gives us the current spec behavior (a trigger for each db with
      the same value) and lets us get more interesting using the `:options`
      on the database in the future.
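
      A tiny sketch of how the global hours value could map onto that cron
      schedule (the function name is illustrative):

      ```clojure
      (defn- refresh-cron
        "Cron string for an every-n-hours refresh; the API enforces 1 <= n <= 23."
        [hours]
        (format "0 0 0/%d * * ? *" hours))

      (refresh-cron 8) ;; => "0 0 0/8 * * ? *"
      ```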
      
      * Mark as admin not internal
      
      lets it show up in `api/setting/`. I'm torn on how special this value
      is: is it the setting code's responsibility to invoke the rescheduling
      of refresh triggers, or should that be on the setting itself?

      It feels "special" in that it can do a lot of work from just setting
      an integer. There's a special endpoint to set it which is aware of
      this, so it would be a bit of an error to set this setting through the
      more traditional settings endpoint.
      
      * Allow for "once a day" refresh interval
      
      * Global setting to enable/disable
      
      post api/persist/enable
      post api/persist/disable
      
      enable allows for other scheduling operations (enabling on a db, and
      then on a model).
      
      Disable will
      - update each enabled database and disable in options
      - update each persisted_info record and set it inactive and state
      deleteable
      - unschedule triggers to refresh
      - schedule a task to unpersist each model (deleting the table and
      associated persisted_info row)
      
      * offset and limits on persisted info list
      
      ```shell
      http get "localhost:3000/api/persist?limit=1&offset=1" Cookie:$COOKIE -pb
      {
          "data": [
              {
                  "active": true,
                  "card_id": 4114,
                  "card_name": "Categories from redshift",
                  "columns": [
                      "catid",
                      "catgroup",
                      "catname",
                      "catdesc"
                  ],
                  "database_id": 229,
                  "database_name": "redshift",
                  "error": null,
                  "id": 12,
                  "next-fire-time": "2022-04-08T00:00:00.000Z",
                  "refresh_begin": "2022-04-07T22:12:49.209997Z",
                  "refresh_end": "2022-04-07T22:12:49.720232Z",
                  "schema_name": "metabase_cache_1e483_229",
                  "state": "persisted",
                  "table_name": "model_4114_categories"
              }
          ],
          "limit": 1,
          "offset": 1,
          "total": 2
      }
      ```
      
      * Include collection id, name, and authority level
      
      * Include creator on persisted-info records
      
      * Add settings to manage model persistence globally (#21546)
      
      * Common machinery for running steps
      
      * Add model cache refreshes monitoring page (#21551)
      
      * don't do shenanigans
      
      * Refresh persisted and error persisted_info rows
      
      * Remarks on migration column
      
      * Lint nits (sorted-ns and docstrings)
      
      * Clean up unused function, docstring
      
      * Use `onChanged` prop to call extra endpoints (#21593)
      
      * Tests for persist-refresh
      
      * Reorder requires
      
      * Use quartz for individual refreshing for safety
      
      switch to using one-off jobs to refresh individual tables. Required
      adding some job context so we know which type to run.
      
      Also, cleaned up the interface between ddl.interface and the
      implementations. The common behaviors of advancing persisted-info state,
      setting active, duration, etc are in a public `persist!` function which
      then calls to the multimethod `persist!*` function for just the
      individual action on the cached table.
      
      Still more work to be done:
      - do we want creating and deleting to be put into this type of system?
      Quite possible
      - we still don't know if a query is running against the cached table
      that can prevent dropping the table. Perhaps using some delay to give
      time for any running query to finish. I don't think we can easily solve
      this in general because another instance in the cluster could be
      querying against it and we don't have any quick pub/sub type of
      information sharing. DB writes would be quite heavy.
      - clean up the ddl.i/unpersist method in the same way we did refresh and
      persist. Not quite clear what to do about errors, return values, etc.
      
      * Update tests with more job-info in context
      
      * Fix URL type conflicts
      
      * Whoops get rid of our Thread/sleep test :)
      
      * Some tests for the new job-data, clean up task history saving
      
      * Fix database model persistence button states (#21636)
      
      * Use plain database instance on form
      
      * Fix DB model persistence toggle button state
      
      * Add common `getSetting` selector
      
      * Don't show caching button when turned off globally
      
      * Fix text issue
      
      * Move button from "Danger zone"
      
      * Fix unit test
      
      * Skip default setting update request for model persistence settings (#21669)
      
      * Add a way to skip default setting update request
      
      * Skip default setting update for persistence
      
      * Add changes for front end persistence
      
      - Order by refresh_begin descending
      - Add endpoint /persist/:persisted-info-id for fetching a single entry.
      
      * Move PersistInfo creation into interface function
      
      * Hide model cache monitoring page when caching is turned off (#21729)
      
      * Add persistence setting keys to `SettingName` type
      
      * Conditionally hide "Tools" from admin navigation
      
      * Conditionally hide caching Tools tab
      
      * Add route guard for Tools
      
      * Handle missing settings during init
      
      * Add route for fetching persistence by card-id
      
      * Wrangling persisted-info states
      
      Make quartz jobs handle any changes to database.
      Routes mark persisted-info state and potentially trigger jobs.
      Job read persisted-info state.
      
      Jobs
      
      - Prune
      -- deletes PersistedInfo `deleteable`
      -- deletes cache table
      
      - Refresh
      -- ignores `deletable`
      -- update PersistedInfo `refreshing`
      -- drop/create/populate cache table
      
      Routes
      
      card/x/persist
      - creates the PersistedInfo `creating`
      - trigger individual refresh
      
      card/x/unpersist
      - marks the PersistedInfo `deletable`
      
      database/x/unpersist
      - marks the PersistedInfos `deletable`
      - stops refresh job
      
      database/x/persist
      - starts refresh job
      
      /persist/enable
      - starts prune job
      
      /persist/disable
      - stops prune job
      - stops refresh jobs
      - trigger prune once
      
      * Save the definition on persist info
      
      This removes the `columns` and `query_hash` columns in favor of `definition`.

      This means that if the persisted understanding of the model is
      different from the actual model during fetch-source-query, we won't
      substitute.
      
      This makes sure we keep columns and datatypes in line.
      
      * Remove columns from api call
      
      * Add a cache section to model details sidebar (#21771)
      
      * Extract `ModelCacheRefreshJob` type
      
      * Add model cache section to sidebar
      
      * Use `ModelCacheRefreshStatus` type name
      
      * Add endpoint to fetch persistence info by model ID
      
      * Use new endpoint at QB
      
      * Use `CardId` from `metabase-types/api`
      
      * Remove console.log
      
      * Fix `getPersistedModelInfoByModelId` selector
      
      * Use `t` instead of `jt`
      
      * Provide seam for prune testing
      
      - Fix spelling of deletable
      
      * Include query hash on persisted_info
      
      We thought we could get away with just checking the definition, but
      that is schema-shaped. So if you changed a where clause we should
      invalidate, but the definition would be the same (same table name,
      same columns with types).
      
      * Put random hash in PersistedInfo test defaults
      
      * Fixing linters
      
      * Use new endpoint for model cache refresh modal (#21742)
      
      * Use new endpoint for cache status modal
      
      * Update refresh timestamps on refresh
      
      * Move migration to 44
      
      * Dispatch on initialized driver
      
      * Side effects get bangs!
      
      * batch hydrate :persisted on cards
      
      * bang on `models.persisted-info/make-ready!`
      
      * Clean up a doc string
      
      * Random fixes: docstrings, make private, etc
      
      * Bangs on side effects
      
      * Rename global setting to `persisted-models-enabled`
      
      The old name (enabled-persisted-models) felt awkward, so it was
      renamed to be a bit more natural. If you are developing, you need to
      set the new value to true and then your state will stay the same.
      
      * Rename parameter for site-uuid-str for clarity
      
      * Lint cleanups
      
      Interesting that the compojure one is needed for clj-kondo, but I
      guess it makes sense since there is a raw `GET` in `defendpoint`.
      
      * Docstring help
      
      * Unify type :type/DateTimeWithTZ and :type/DateTimeWithLocalTZ
      
      both are "TIMESTAMP WITH TIME ZONE". I had got an error and saw that the
      type was timestamptz so i used that. They are synonyms although it might
      require an extension.
      
      * Make our old ns linter happy
      
      Co-authored-by: Alexander Polyankin <alexander.polyankin@metabase.com>
      Co-authored-by: Anton Kulyk <kuliks.anton@gmail.com>
      Co-authored-by: Case Nelson <case@metabase.com>
      c504a12e
  29. Apr 29, 2022
  30. Apr 27, 2022
    • Validate datasets are found when checking bigquery (#22144) · f4e49dbd
      Case Nelson authored
      * Validate datasets are found when checking bigquery
      
      Fixes #19709
      
      * Address PR feedback
      
      Made a general mechanism to pass expected messages to users in
      api/database via ex-info. This allows us to suppress logging for
      "unexceptional" exceptions that one can expect to hit while setting up
      drivers.
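
      A minimal sketch of the mechanism (the marker key and message are
      illustrative):

      ```clojure
      ;; throw with a marker so api/database can surface the message to the
      ;; user and skip logging it as an unexpected error
      (throw (ex-info "Looks like we cannot find any matching datasets."
                      {:message-to-user? true}))
      ```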
      
      * Only validate when filters are set
      
      Also removed the dataset list from the exception as it's not surfaced to
      users.
      f4e49dbd
    • Fix errors when downgrading then upgrading to bigquery driver (#22121) · 2ada1fc4
      dpsutton authored
      This issue has a simple fix but a convoluted story. The new bigquery
      driver handles multiple schemas and puts that schema (dataset-id) in the
      normal spot on a table in our database. The old driver handled only a
      single schema by having that dataset-id hardcoded in the database
      details and leaving the schema slot nil on the table row.
      
      ```clojure
      ;; new driver describe database:
      [{:name "table-1" :schema "a"}
       {:name "table-2" :schema "b"}]
      
      ;; old driver describe database (with dataset-id "a" on the db):
      [{:name "table-1" :schema nil}]
      ```
      
      So if you started on the new driver and then downgraded for some
      reason, table sync would see you had tables with schemas, but when it
      enumerated the tables in the database on the next sync, it would see
      tables without schemas. It did not unify these two together, nor did
      it archive the tables with a schema. You ended up with both copies in
      the database, all active.
      
      ```clojure
      [{:name "table-1" :schema "a"}
       {:name "table-2" :schema "b"}
       {:name "table-1" :schema nil}]
      ```
      
      If you then tried to migrate back to the newer driver, we migrated them
      as normal: since the old driver only dealt with one schema but left it
      nil, put that dataset-id on all of the tables connected to this
      connection.
      
      But since the new driver and then the old driver created copies of the
      same tables, you would end up with a constraint violation: tables with
      the same name and, now after the migration, the same schema. Ignore this
      error and the sync in more recent versions will correctly inactivate the
      old tables with no schema.
      
      ```clojure
      [{:name "table-1" :schema "a"}  <-|
       {:name "table-2" :schema "b"}    | constraint violation
       {:name "table-1" :schema "a"}] <-|
      
      ;; preferable:
      [{:name "table-1" :schema "a"}
       {:name "table-2" :schema "b"}
       {:name "table-1" :schema nil :active false}]
      ```
      2ada1fc4
  31. Apr 26, 2022
    • Bump bigquery version to first version that supports SNAPSHOT tables (#22049) · 54d064fc
      dpsutton authored
      Fixes #19860
      
      SNAPSHOT tables in bigquery hold diffs from an underlying table:
      https://cloud.google.com/bigquery/docs/table-snapshots-intro. But the
      support in the SDK only came in 1.135.0:
      https://github.com/googleapis/java-bigquery/blob/main/CHANGELOG.md#11350-2021-06-28
      
      I picked the most recent 1.135 version.
      
      Running
      
      ```shell
      clj -A:dev:ee:ee-dev:drivers:drivers-dev -Stree
      ```
      
      Shows conflicts on
      
      ```
      X google-http-client-jackson2 1.39.2 :older-version
      ; using 1.39.2-sp.1 from google analytics
      
      X com.fasterxml.jackson.core/jackson-core 2.12.3 :older-version
      ; from cheshire we have 2.12.4
      
      X com.google.http-client/google-http-client 1.39.2 :superseded
      ; using 1.39.2-sp.1 from google-http-client-jackson2 (1.39.2-sp1)
      
      X commons-codec/commons-codec 1.15 :use-top
      ; pinned to this version at top level
      
      X com.google.guava/guava 30.1.1-android :use-top
      ; pinned to 31.0.1-jre top level
      ```
      
      So I think this change is quite safe. After the release we should
      investigate the breaking changes that come in the 2.0.0 release and look
      into getting onto 2.10.10. This version worked locally for me but I
      don't want to introduce that into the release just yet.
      54d064fc
  32. Apr 22, 2022
    • Handle March 31st + 3 months (June 31st?) for Oracle (#21841) · b9cedccc
      Cam Saul authored
      * Handle March 31st + 3 months for Oracle (#10072)
      
      * Optimize out some casting in Oracle
      
      * rx util support varargs inside `opt`
      
      * hx/ math operators like + and - should propagate type information
      
      * Some dox tweaks
      
      * Fix SQLite busted behavior
      
      * Avoid unneeded casting in Vertica when adding temporal intervals
      
      * Lint error fixes
      
      * BigQuery fix for #21969
      
      * Add testing context for tests for #21968 and #21971
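
      A quick java.time illustration of the month-end edge case in the title
      ("June 31st" doesn't exist, so the day clamps):

      ```clojure
      (.plusMonths (java.time.LocalDate/of 2022 3 31) 3)
      ;; => 2022-06-30 (clamped to the last day of June)
      ```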
      b9cedccc
    • Group-by fix for JSON columns. (#21741) · bad129cd
      Howon Lee authored
      Group-bys didn't work because you need two instances of the field,
      and the Postgres backend wouldn't consider two instances of the field
      to be the same. Ported over the BigQuery fix so it applies to JSON
      columns as well.
      bad129cd