This project is mirrored from https://github.com/metabase/metabase.
  1. Dec 14, 2024
  2. Dec 12, 2024
  3. Nov 19, 2024
  4. Nov 16, 2024
  5. Nov 15, 2024
  6. Nov 12, 2024
• Switch to Java 21 (#48854) · b1ad9a1a
      Cam Saul authored
      
      * Try snowflake 3.19.0
      
* Default to Java 21 for drivers
      
      * Trigger CI
      
      * Try running things with --add-opens to see if it solves our problems
      
      * Fix `update-view-dashboard-timestamp-test`
      
      * Try snowflake 3.19.0
      
* Default to Java 21 for drivers
      
      * Try running things with --add-opens to see if it solves our problems
      
      * Fix `update-view-dashboard-timestamp-test`
      
      * Update deps.edn
      
      * Switch to Java 21
      
* Docker image needs to use --add-opens option (see the sketch after this entry)
      
      * Add note about change to Java 21 to driver changelog
      
      ---------
      
Co-authored-by: Nemanja <31325167+nemanjaglumac@users.noreply.github.com>
Co-authored-by: Luis Paolini <paoliniluis@gmail.com>
      b1ad9a1a
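The `--add-opens` bullets above refer to JVM module flags that the Java 21 upgrade needs for certain reflective access, passed both to local runs and to the Docker image. A minimal sketch of wiring such flags into a `deps.edn` alias as `:jvm-opts`; the specific module/package pair is an illustrative assumption, not the flag set this PR actually added:

```clojure
;; deps.edn (sketch) -- a hypothetical :dev alias passing --add-opens to the JVM.
;; The java.base/java.nio pair is an illustrative assumption, not the PR's actual list.
{:aliases
 {:dev
  {:jvm-opts ["--add-opens" "java.base/java.nio=ALL-UNNAMED"]}}}
```

The same flag string would go into whatever launches the JVM in the Docker image (e.g. its Java options), which is what the Docker bullet above is about.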
  7. Nov 10, 2024
  8. Nov 06, 2024
  9. Nov 05, 2024
  10. Oct 25, 2024
• Analyze tables for compound queries (#49144) · c6683ce7
      Chris Truter authored
      ### Description
      
This PR updates Macaw to the latest version, which exposes a new analyzer that handles compound queries.
      
      It also finishes the migration to using tagged-union result maps consistently across all the interfaces.
      
      This should also remove the atrociously verbose analysis error messages in CI.
      c6683ce7
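For orientation, "tagged-union result maps" means every analysis call returns a map whose tag key identifies which variant was produced, so callers dispatch on the tag instead of sniffing the map's shape. A minimal sketch of consuming such results; the tag and key names are assumptions for illustration, not Macaw's actual interface:

```clojure
;; Sketch of consuming tagged-union result maps (tag and keys are hypothetical,
;; not Macaw's real API): dispatch on the tag rather than on the map's shape.
(defn tables-or-nil [result]
  (case (:type result)
    :success     (:tables result)   ;; e.g. {:type :success, :tables #{"orders" "people"}}
    :unsupported nil                ;; e.g. {:type :unsupported, :reason :compound-query}
    :error       (throw (ex-info "SQL analysis failed" result))))
```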
  11. Oct 23, 2024
  12. Oct 22, 2024
  13. Oct 21, 2024
  14. Oct 16, 2024
  15. Oct 14, 2024
• [Notification] Migrate emails (#48392) · a1d8face
      Ngoc Khuat authored
      * [Notification] Migrate user invited email (#48215)
      
      * [Notification] Migrate alert create email (#48292)
      
      * [Notification] Migrate slack token error email (#48333)
      a1d8face
• Collapse `metabase.shared.*` namespaces (#48646) · 6024bf01
      Cam Saul authored
      * Collapse `metabase.shared.*` namespaces
      
      * Fix Kondo warnings
      
      * Does updating the stories-data keys fix the failing tests?
      
      * Appease msgcat
      
      * Appease msgcat
      
      * Fix typo
      
      * Make the build happy
      
      * Appease fslint
      6024bf01
  16. Oct 08, 2024
• [Databricks] Address initial remarks (#48377) · 72495873
      lbrdnk authored
      * Address initial remarks
      
      * Extract hive-like to separate module and set it as dependency
      
      * Remove hive-like also from spark
      72495873
• :race_car::rocket::race_car::rocket: :race_car::rocket: SHAVE 7 MINUTES OFF OF NON-CORE DRIVER TEST RUNS IN CI :race_car::rocket::race_car::rocket: :race_car::rocket: (#47681) · cd4d7646
      Cam Saul authored
      * Parallel driver tests PoC
      
      * Set fail-fast to false for now
      
      * Try splitting up non-driver tests to see how broken tests are
      
      * Whoops fix plain BE tests
      
      * Ok nvm I'll test this in another branch
      
      * Fix fail-fast
      
      * Experiment with the improved Hawk split logic
      
      * Fix some broken/flaky tests
      
      * Experiment: try splitting MySQL 8 tests into FOUR jobs
      
* Divide other Postgres and MySQL tests up and use num-partitions = 2 (see the partitioning sketch after this entry)
      
      * Another test fix :wrench:
      
      * Flaky test fix :wrench:
      
      * Try making more stuff fast
      
      * Make athena fast??
      
      * Fix a few more things
      
      * Test fixes? :wrench:
      
      * Fix configs
      
      * Fix Mongo job syntax
      
      * Fix busted test from #46942
      
      * Fix Mongo config again
      
      * wait-for-port needs to specify shell I guess
      
      * More cleanup
      
      * await-port can't have a timeout-minutes I guess
      
      * Let's only parallelize MySQL for now.
      
      * Cleanup action
      
      * Cleanup wait-for-port action
      
      * Fix another flaky test
      
      * NOW driver tests will be FAST
      
      * Need to mark driver tests too
      
      * Fix wrong tag
      
      * Use Hawk 1.0.5
      
      * Fix busted metabase.public-settings-test/landing-page-setting-test
      
      * Fix busted `metabase.api.database-test/get-database-test` etc. (hopefully)
      
      * Fix busted `metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test` for Oracle
      
      * Maybe this fixed `metabase.query-processor.middleware.permissions-test/e2e-ignore-user-supplied-perms-test` maybe not
      
* Fix busted metabase.api.dashboard-test/dependent-metadata-test because the endpoint had a different sort order than the test
      
      * Ok my test fix did not work
      
      * Fix metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test for Redshift
      
      * Better test name
      
      * More test fixes :wrench:
      
      * Schema fix
      
      * PR feedback
      
      * Split off test partitioning into separate PR
      
      * Fix failing Oracle tests
      
      * Another round of test fixes, hopefully
      
      * Fix failing Redshift tests
      
      * Maybe the last round of test fixes
      
      * Fix Oracle
      
      * Fix stray line
      cd4d7646
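The `num-partitions` bullets above come down to splitting one test suite deterministically across several CI jobs so they can run in parallel. A generic sketch of that idea under simple assumptions; this is not Hawk's actual splitting implementation or API:

```clojure
;; Generic sketch of deterministic test partitioning (not Hawk's real implementation):
;; every CI job computes the same ordering, then keeps only its own slice.
(defn partition-for-job
  "Return the subset of `test-namespaces` that job `index` (0-based) of
  `num-partitions` total jobs should run."
  [test-namespaces num-partitions index]
  (->> (sort test-namespaces)                              ;; identical order on every job
       (map-indexed vector)
       (filter (fn [[i _]] (= (mod i num-partitions) index)))
       (map second)))

;; e.g. (partition-for-job ["a.test" "b.test" "c.test" "d.test"] 2 1) => ("b.test" "d.test")
```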
  17. Sep 30, 2024
• Do not cache all token check failures (#48147) · 0ef2052f
      John Swanson authored
      * Do not cache all token check failures
      
      We want to cache token checks to avoid an issue where we repeatedly ask
      the store "hey, is this token valid?? is this token valid?? is this
      token valid??" for the same token.
      
      However, transient errors can also occur. For example, maybe a network
      issue causes the HTTP request to fail entirely. In this case, if we
cache the result, the user needs to restart Metabase (or wait 5 minutes
      until the cache is cleared) before they can attempt to validate their
      token again.
      
      This PR moves the cache logic deeper into the stack. We want to cache
      "successful" responses from the store API - cases where the store has
      told us categorically that the token is or is not valid. We don't want
      or need to cache other things that might happen. Maybe your token isn't
      the right length - we can recalculate that, it's ok. Maybe you get a 503
      error from the Store - we should let you retry. Maybe your network is
      having issues and you can't contact the Store at all - again, we should
      let you retry.
      
      The one potential issue I see here is that if the store goes down, we'll
      massively increase the number of requests we send to the store,
      potentially making it harder to recover. If this is a concern, I can add
      a circuit breaker: if we repeatedly get errors back from the store, back
      off and stop making requests for a while.
      
      * Add a circuit breaker to store API requests
      
      In the pathological case where the store goes down for >5 minutes, the
      cache will expire and all instances everywhere will start repeatedly
      making requests for token validation at once. This might make recovering
      from an outage more difficult.
      
      This adds a circuit breaker around the API request. If the call
      repeatedly throws (5XX errors, socket timeouts, etc.) then we'll pause
      for 1 minute, during which time all calls to token validation will
      immediately fail without making any request to the API.
      
      After one minute, we'll allow one request through to the API. If it
      succeeds, we'll go back to normal operation. Otherwise, we'll wait
      another minute.
      0ef2052f
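A minimal sketch of the two ideas described above: cache only definitive answers from the store, and open a one-minute breaker after repeated transport failures. Names, thresholds, and structure here are illustrative assumptions, not Metabase's actual implementation:

```clojure
;; Sketch only -- function and key names are hypothetical, not Metabase's real code.
(def ^:private breaker (atom {:failures 0, :open-until 0}))

(defn- check-token-with-breaker
  "Calls `fetch-status!` (a function that hits the store) unless the breaker is open.
  Only definitive :valid / :invalid answers should be cached by the caller;
  exceptions bubble up uncached so the user can simply retry."
  [fetch-status!]
  (let [now (System/currentTimeMillis)]
    (when (< now (:open-until @breaker))
      (throw (ex-info "Token check temporarily disabled" {:retry-at (:open-until @breaker)})))
    (try
      (let [status (fetch-status!)]          ;; e.g. returns :valid or :invalid
        (swap! breaker assoc :failures 0)
        status)
      (catch Exception e
        (let [{:keys [failures]} (swap! breaker update :failures inc)]
          (when (>= failures 5)              ;; 5 consecutive failures => open for 1 minute
            (swap! breaker assoc :open-until (+ now 60000) :failures 0)))
        (throw e)))))
```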
  18. Sep 26, 2024
• Databricks JDBC driver (#42263) · c04928d5
      lbrdnk authored
      * Databricks JDBC driver base
      
      * Add databricks CI job
      
      * WIP data loading -- it works, further cleanup needed
      
      * Cleanup
      
      * Implement ->honeysql to enable data loading
      
      * Hardcode catalog job var
      
      * Implement driver methods and update tests
      
      * Derive hive instead of sql-jdbc
      
      * Cleanup leftovers after deriving hive
      
      * Run databricks tests on push
      
* Cleanup and enable set-timezone
      
      * Disable database creation by tests
      
      * Add Databricks to broken drivers for timezone tests
      
      * Exclude Databricks from test
      
      * Enable have-select-privilege?-test
      
      * Restore sql-jdbc-drivers-using-default-describe-table-or-fields-impl post rebase
      
      * Restore joined-date-filter-test
      
      * Adjust to work with dataset definition tests
      
      * Adjust alternative date tests
      
* Remove leftover reflection warning set
      
      * Update test exts
      
      * cljfmt vscode
      
      * Add databricks to kondo drivers
      
      * Update metabase-plugin.yaml
      
      * Update databricks_jdbc.clj
      
      * Rework test extensions
      
      * Update general data loading code to work with Databricks
      
      * Reset tests to orig
      
      * Use DateTimeWithLocalTZ for TIMESTAMP database type
      
      * Convert to LocalDateTime for set-parameter
      
* Update test extensions field-base-type->sql-type
      
      * Update database-type->base-type
      
      * Enable creation of time columns in test data even though not supported
      
      * Fix typo
      
      * Update tests
      
* Update tests
      
      * Update drivers.yml
      
      * Disable dynamic dataset loading tests
      
      * Adjust the iso-8601-text-fields-should-be-queryable-date-test
      
      * Update load-data/row-xform
      
      * Add time type exception to test
      
      * Update test data loading and enable test
      
      * Whitespace
      
      * Enable all driver jobs
      
      * Update comment
      
      * Make catalog mandatory
      
      * Remove comment
      
      * Remove log level from spec generation
      
      * Update sql.qp/datetime-diff
      
      * Update read-column-thunk
      
      * Remove comment
      
      * Simplify date-time->results-local-date-time
      
      * Update comment
      
      * Move definitions
      
      * Update test extension types mapping
      
      * Remove now obsolete ddl/insert-rows-honeysql-form implementation
      
* Update sql-jdbc.conn/connection-details->spec for perturb-db-details (a sketch of this method follows this entry)
      
      * Update load-data/do-insert!
      
      * Remove ssh tunnel from driver as tests do not work with it
      
      * Update test
      
      * Promote ::dynamic-dataset-loading to :test/dynamic-dataset-loading and modify corresponding tests
      
      * Adjust to broken TIMESTAMP_NTZ sync
      
      * Update read-column-thunk to return timestamps always in Z
      
      * Comment
      
      * Disable tests for dynamic datasets
      
      * Return spark jobs into drivers.yml
      
      * Update Databricks CI catalog
      
      * Remove vscode cljfmt tweak
      
      * Update iso-8601-text-fields-expected-rows
      
      * Update datetime-diff
      
      * Formatting
      
      * cljfmt
      
      * Add placeholder test
      
      * Remove comment
      
      * cljfmt
      
      * Use EnableArrow=0 connection param
      
      * Remove comment
      
      * Comment
      
      * Update tests
      
      * cljfmt
      
      * Update driver's deps.edn
      
      * Update tests
      
      * Implement alternative `describe-table`
      
      * WIP Workaround for timestamp_ntz sync, will be thrown away probably
      
      * Update metabase-plugin.yaml with schema filters
      
* Update driver to use schema filters and remove now-redundant sync implementations
      
      * Update tests
      
      * Update tests extensions
      
      * Update test
      
      * Add feature flags for fast sync
      
      * Implement describe-fields
      
      * Implement describe-fks-sql
      
      * Enable fast sync features
      
      * Use full_data_type
      
      * Comment
      
      * Add exception for timestamp_ntz columns to new sync code
      
      * Implement db-default-timezone
      
      * Add timestamp_ntz ignored test
      
      * Add db-default-timezone-test
      
      * Fix typo
      
      * Update setReadOnly
      
      * Add comment on setAutoCommit
      
      * Update chunk-size
      
      * Add timezone-in-set-and-read-functions-test
      
      * Drop Athena from driver exceptions
      
      * Use set/intersection instead of a filter
      
      * Add explicit fast-sync tests
      
      * Update describe-fields-sql and add comment
      
      * Add preprocess-additional-options
      
      * Add leading semicolon test
      
      * Disable dataset creation and update comment
      
      * Rename driver to `databricks`
      
      * Use old secret names
      
      * Fix wrongly copied hsql list
      
      * Temporarily allow database creation
      
      * Add *allow-database-deletion*
      
      * Temporarily allow database creation
      
      * Disable database creation
      
      * cljfmt
      
      * cljfmt
      c04928d5
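For orientation, the `connection-details->spec` work referenced above is the step that turns Metabase's connection details map into a JDBC spec for the Databricks driver. A rough sketch of such a method; the namespace name, detail keys, and URL/option shape are assumptions for illustration, not the shipped driver:

```clojure
;; Sketch of a sql-jdbc connection-details->spec method (detail keys, URL shape,
;; and option handling are illustrative assumptions, not the actual driver code).
(ns metabase.driver.databricks-sketch
  (:require [metabase.driver.sql-jdbc.connection :as sql-jdbc.conn]))

(defmethod sql-jdbc.conn/connection-details->spec :databricks
  [_driver {:keys [host http-path token catalog]}]
  {:classname   "com.databricks.client.jdbc.Driver"
   :subprotocol "databricks"
   :subname     (str "//" host ":443/;transportMode=http;ssl=1"
                     ";httpPath=" http-path
                     ";ConnCatalog=" catalog
                     ";EnableArrow=0"          ;; per the "Use EnableArrow=0 connection param" commit
                     ";AuthMech=3;UID=token;PWD=" token)})
```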
  19. Sep 10, 2024
  20. Sep 06, 2024
  21. Sep 04, 2024
  22. Sep 02, 2024
  23. Aug 31, 2024
  24. Aug 30, 2024
  25. Aug 29, 2024
• Update Kondo to `2024.08.01` and add `deps.edn` aliases to run from the JVM (#47370) · 7fb88340
      Cam Saul authored
* Add `clojure -M:kondo` and `clojure -M:kondo:kondo/all` and bump version (an alias sketch follows this entry)
      
      * Fix Kondo errors
      
      * Fix Kondo+LSP issues with `defendpoint`, `defenterprise`, etc.
      
      * Use replace-deps instead of deps for speed
      
      * Ok apparently maybe we do need to copy configs when we run Kondo on CI
      
      * Oops `./bin/kondo.sh` should not try to use `clj-kondo`
      
      * Remove references to GA driver folders
      7fb88340
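The `:kondo` alias added above runs clj-kondo on the JVM straight from `deps.edn`, using `:replace-deps` as the commit notes. A sketch of what such an alias can look like; the lint paths are assumptions, and the version pin comes from the commit title:

```clojure
;; deps.edn sketch (alias name and :replace-deps from the commit; lint paths are illustrative).
{:aliases
 {:kondo
  {:replace-deps {clj-kondo/clj-kondo {:mvn/version "2024.08.01"}}
   :main-opts    ["-m" "clj-kondo.main" "--lint" "src" "test"]}}}
;; invoked as: clojure -M:kondo
```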
  26. Aug 28, 2024
  27. Aug 22, 2024
  28. Aug 21, 2024