This project is mirrored from https://github.com/metabase/metabase.
  1. Oct 04, 2024
  2. Oct 03, 2024
  3. Oct 02, 2024
    • Add tests for exporting self-joined renamed columns (#48244) · e2b74d79
      appleby authored
      * Add BE test for exporting self-joined renamed columns
      * Add e2e test for exporting self-joined renamed columns
      
      See also
      
      - Not renamed fields in a same table join inherit the renamed name in exports #48046
      - backport: Support both name & field ref-based column keys in viz settings on read and upgrade on write #48243
  4. Oct 01, 2024
  5. Sep 30, 2024
    • Do not cache all token check failures (#48147) · 0ef2052f
      John Swanson authored
      * Do not cache all token check failures
      
      We want to cache token checks to avoid an issue where we repeatedly ask
      the store "hey, is this token valid?? is this token valid?? is this
      token valid??" for the same token.
      
      However, transient errors can also occur. For example, maybe a network
      issue causes the HTTP request to fail entirely. In this case, if we
      cache the result, the user needs to restart Metabase (or wait 5 minutes
      until the cache is cleared) before they can attempt to validate their
      token again.
      
      This PR moves the cache logic deeper into the stack. We want to cache
      "successful" responses from the store API - cases where the store has
      told us categorically that the token is or is not valid. We don't want
      or need to cache other things that might happen. Maybe your token isn't
      the right length - we can recalculate that, it's ok. Maybe you get a 503
      error from the Store - we should let you retry. Maybe your network is
      having issues and you can't contact the Store at all - again, we should
      let you retry.
      
      The one potential issue I see here is that if the store goes down, we'll
      massively increase the number of requests we send to the store,
      potentially making it harder to recover. If this is a concern, I can add
      a circuit breaker: if we repeatedly get errors back from the store, back
      off and stop making requests for a while.
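The caching rule described above can be sketched as follows. This is a minimal Python illustration with hypothetical names (`TokenCache`, `check_with_store`), not Metabase's actual Clojure implementation: only a definitive answer from the store is cached, while transient failures raise and are retried on the next call.

```python
# Hypothetical sketch: cache only definitive store answers ("valid"/"invalid");
# transient errors propagate uncached so the caller can retry immediately.
import time


class TokenCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._entries = {}  # token -> (result, expires_at)

    def get(self, token, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(token)
        if entry and entry[1] > now:
            return entry[0]
        return None

    def put(self, token, result, now=None):
        now = time.monotonic() if now is None else now
        self._entries[token] = (result, now + self.ttl)


def check_token(token, cache, check_with_store):
    cached = cache.get(token)
    if cached is not None:
        return cached
    # A network error or 5xx raises out of check_with_store and nothing is
    # stored, so the next call retries against the store right away.
    result = check_with_store(token)  # returns "valid" or "invalid", or raises
    cache.put(token, result)
    return result
```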
      
      * Add a circuit breaker to store API requests
      
      In the pathological case where the store goes down for >5 minutes, the
      cache will expire and all instances everywhere will start repeatedly
      making requests for token validation at once. This might make recovering
      from an outage more difficult.
      
      This adds a circuit breaker around the API request. If the call
      repeatedly throws (5XX errors, socket timeouts, etc.) then we'll pause
      for 1 minute, during which time all calls to token validation will
      immediately fail without making any request to the API.
      
      After one minute, we'll allow one request through to the API. If it
      succeeds, we'll go back to normal operation. Otherwise, we'll wait
      another minute.
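The breaker behavior described above (fail fast while open, allow one trial request after a minute, close again on success) can be sketched like this. The class and threshold values are illustrative assumptions, not the actual Clojure implementation:

```python
# Minimal circuit-breaker sketch: after repeated failures, open for
# open_seconds and fail fast; then allow one half-open trial call through;
# success closes the breaker, failure re-opens it for another interval.
import time


class CircuitBreaker:
    def __init__(self, failure_threshold=5, open_seconds=60, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.open_seconds = open_seconds
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the breaker is closed

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.open_seconds:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: fall through and let exactly this trial call run.
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold or self.opened_at is not None:
                self.opened_at = self.clock()  # (re)open for another interval
            raise
        self.failures = 0
        self.opened_at = None
        return result
```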
  6. Sep 27, 2024
  7. Sep 26, 2024
    • Databricks JDBC driver (#42263) · c04928d5
      lbrdnk authored
      * Databricks JDBC driver base
      
      * Add databricks CI job
      
      * WIP data loading -- it works, further cleanup needed
      
      * Cleanup
      
      * Implement ->honeysql to enable data loading
      
      * Hardcode catalog job var
      
      * Implement driver methods and update tests
      
      * Derive hive instead of sql-jdbc
      
      * Cleanup leftovers after deriving hive
      
      * Run databricks tests on push
      
      * Cleanup and enable set-timezone
      
      * Disable database creation by tests
      
      * Add Databricks to broken drivers for timezone tests
      
      * Exclude Databricks from test
      
      * Enable have-select-privilege?-test
      
      * Restore sql-jdbc-drivers-using-default-describe-table-or-fields-impl post rebase
      
      * Restore joined-date-filter-test
      
      * Adjust to work with dataset definition tests
      
      * Adjust alternative date tests
      
      * Remove leftover reflection warning set
      
      * Update test exts
      
      * cljfmt vscode
      
      * Add databricks to kondo drivers
      
      * Update metabase-plugin.yaml
      
      * Update databricks_jdbc.clj
      
      * Rework test extensions
      
      * Update general data loading code to work with Databricks
      
      * Reset tests to orig
      
      * Use DateTimeWithLocalTZ for TIMESTAMP database type
      
      * Convert to LocalDateTime for set-parameter
      
      * Update test extensions field-base-type->sql-type
      
      * Update database-type->base-type
      
      * Enable creation of time columns in test data even though not supported
      
      * Fix typo
      
      * Update tests
      
      * Update tests
      
      * Update drivers.yml
      
      * Disable dynamic dataset loading tests
      
      * Adjust the iso-8601-text-fields-should-be-queryable-date-test
      
      * Update load-data/row-xform
      
      * Add time type exception to test
      
      * Update test data loading and enable test
      
      * Whitespace
      
      * Enable all driver jobs
      
      * Update comment
      
      * Make catalog mandatory
      
      * Remove comment
      
      * Remove log level from spec generation
      
      * Update sql.qp/datetime-diff
      
      * Update read-column-thunk
      
      * Remove comment
      
      * Simplify date-time->results-local-date-time
      
      * Update comment
      
      * Move definitions
      
      * Update test extension types mapping
      
      * Remove now obsolete ddl/insert-rows-honeysql-form implementation
      
      * Update sql-jdbc.conn/connection-details->spec for perturb-db-details
      
      * Update load-data/do-insert!
      
      * Remove ssh tunnel from driver as tests do not work with it
      
      * Update test
      
      * Promote ::dynamic-dataset-loading to :test/dynamic-dataset-loading and modify corresponding tests
      
      * Adjust to broken TIMESTAMP_NTZ sync
      
      * Update read-column-thunk to return timestamps always in Z
      
      * Comment
      
      * Disable tests for dynamic datasets
      
      * Return spark jobs into drivers.yml
      
      * Update Databricks CI catalog
      
      * Remove vscode cljfmt tweak
      
      * Update iso-8601-text-fields-expected-rows
      
      * Update datetime-diff
      
      * Formatting
      
      * cljfmt
      
      * Add placeholder test
      
      * Remove comment
      
      * cljfmt
      
      * Use EnableArrow=0 connection param
      
      * Remove comment
      
      * Comment
      
      * Update tests
      
      * cljfmt
      
      * Update driver's deps.edn
      
      * Update tests
      
      * Implement alternative `describe-table`
      
      * WIP Workaround for timestamp_ntz sync, will be thrown away probably
      
      * Update metabase-plugin.yaml with schema filters
      
      * Update driver to use schema filters and remove now redundant sync implementations
      
      * Update tests
      
      * Update tests extensions
      
      * Update test
      
      * Add feature flags for fast sync
      
      * Implement describe-fields
      
      * Implement describe-fks-sql
      
      * Enable fast sync features
      
      * Use full_data_type
      
      * Comment
      
      * Add exception for timestamp_ntz columns to new sync code
      
      * Implement db-default-timezone
      
      * Add timestamp_ntz ignored test
      
      * Add db-default-timezone-test
      
      * Fix typo
      
      * Update setReadOnly
      
      * Add comment on setAutoCommit
      
      * Update chunk-size
      
      * Add timezone-in-set-and-read-functions-test
      
      * Drop Athena from driver exceptions
      
      * Use set/intersection instead of a filter
      
      * Add explicit fast-sync tests
      
      * Update describe-fields-sql and add comment
      
      * Add preprocess-additional-options
      
      * Add leading semicolon test
      
      * Disable dataset creation and update comment
      
      * Rename driver to `databricks`
      
      * Use old secret names
      
      * Fix wrongly copied hsql list
      
      * Temporarily allow database creation
      
      * Add *allow-database-deletion*
      
      * Temporarily allow database creation
      
      * Disable database creation
      
      * cljfmt
      
      * cljfmt
  8. Sep 25, 2024
    • Migrate stats ping to Snowplow (#47823) · 2173e659
      Noah Moss authored
      
      Co-authored-by: Thomas Schmidt <thomas@metabase.com>
    • Require `:encryption` for string settings (#48067) · 7e00dc32
      John Swanson authored
      * Require `:encryption` for string settings
      
      For settings that are not typed as JSON, CSV, or strings, encryption now
      defaults to `:no` (*except* if you have
      explicitly marked your setting as `:sensitive?` - these will default to
      `:when-encryption-key-set`).
      
      I went through all our settings and provided what I think are reasonable
      values here. I tried to be conservative - when I wasn't sure whether a
      stored setting was sensitive, I kept it as encrypted. For example, the
      `ldap-port` setting is probably non-sensitive but theoretically someone
      could be using a weird port for security-by-obscurity, so I kept that
      encrypted.
      
      * Change possible values for `:encryption`
      
      `:maybe` was confusing: let's be more explicit that the value will be
      encrypted `:when-encryption-key-set` to make it obvious what actually
      turns encryption on and off.
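The default rule described above can be sketched as a small decision function. This is a Python illustration with assumed names; the real settings are declared in Clojure, and the PR title indicates string-like settings must declare `:encryption` explicitly:

```python
# Hedged sketch of the default: settings not typed as JSON, CSV, or string
# default to "no", except :sensitive? settings, which default to
# "when-encryption-key-set"; string-like types must declare it explicitly.
def default_encryption(setting_type: str, sensitive: bool) -> str:
    if setting_type in {"json", "csv", "string"}:
        raise ValueError(
            ":encryption must be declared explicitly for string-like settings"
        )
    return "when-encryption-key-set" if sensitive else "no"
```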
  9. Sep 24, 2024
  10. Sep 20, 2024
  11. Sep 18, 2024
  12. Sep 17, 2024
  13. Sep 13, 2024
    • Fix ordering by the accumulated column (#47875) · 1174d9cc
      metamben authored
    • Ensure :effective-type for columns from an aggregation in a card (#47936) · d8786ffc
      metamben authored
      * Ensure :effective-type for columns from an aggregation in a card
      
      Fixes #47184
      
      * Synchronize effective-type with the base-type on override
      
      When :base-type in the column metadata is overridden with the :base-type in the field ref, set it as :effective-type
      too.  If :effective-type is also set in the field ref, it wins.
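The precedence just described can be illustrated with plain dicts standing in for column metadata and field-ref options (names here are hypothetical, not the actual MLv2 API):

```python
# Sketch of the override rule: a field-ref :base-type overrides the column's
# base-type and is mirrored into :effective-type, unless the field ref also
# sets :effective-type explicitly, in which case that value wins.
def merge_types(column_meta: dict, field_ref_opts: dict) -> dict:
    merged = dict(column_meta)
    if "base-type" in field_ref_opts:
        merged["base-type"] = field_ref_opts["base-type"]
        merged["effective-type"] = field_ref_opts["base-type"]
    if "effective-type" in field_ref_opts:
        merged["effective-type"] = field_ref_opts["effective-type"]
    return merged
```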
    • Revert to non-truncating mbql.u/unique-name-generator in annotate-native-cols (#47866) · 16e31051
      appleby authored
      
      * Rollback to legacy unique-name-generator in annotate-native-cols
      
      * Add native query long column alias test to nested queries tests
      
      * Add native query long column alias test to dataset api tests
      
      * Add native query long column alias test to models e2e tests
      
      * Move limit into inner native query
      
      * PR suggestion: simplify lambda argslist
      
      Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
      
      * PR suggestion: remove unnecessary mt/with-temp-copy-of-db
      
      * Tweak test name
      
      ---------
      
      Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
    • Controlled upgrades (#47877) · 1addcc19
      dpsutton authored
      
      * Initial commit of controlled upgrades
      
      - new setting upgrade-threshold (`MB_UPGRADE_THRESHOLD`)
        number 0-100
      - conditionally removing latest from upgrade checks
      
      AS OF RIGHT NOW IT ALWAYS REMOVES the latest.
      Should help FE make this optional
      
      * dumb mistake
      
      * handle no latest version info
      
      * implement and basic test
      
      * More tests, refactor name
      
      * more tests
      
      * cljfmt does not like commas in source code.
      
      * Just move site-uuid to the declaration and don't declare it
      
      nothing is sacred in the settings order. no reason. i think i was just
      trying to minimize the diff, which was a dumb and ignoble goal.
      
      * add context strings
      
      * update unit tests
      
      * use display version
      
      * Have upgrade threshold include current major version
      
      hopefully helps rotate people from early to late, and late to early
      across major versions. Idea from sanya and quite nice!
      
      ---------
      
      Co-authored-by: Ryan Laurie <iethree@gmail.com>
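A threshold rollout like the one described can be sketched as follows. The hashing scheme is an assumption for illustration only (Metabase's exact algorithm is not shown here); the point is that mixing the current major version into the stable per-instance value rotates instances between early and late cohorts across releases:

```python
# Sketch: derive a stable 0-99 slot from the site uuid plus the current
# major version; instances whose slot falls below MB_UPGRADE_THRESHOLD
# (0-100) see the latest release in upgrade checks. Hypothetical scheme.
import hashlib


def upgrade_slot(site_uuid: str, major_version: int) -> int:
    digest = hashlib.sha256(f"{site_uuid}-{major_version}".encode()).hexdigest()
    return int(digest, 16) % 100


def offer_latest(site_uuid: str, major_version: int, upgrade_threshold: int) -> bool:
    return upgrade_slot(site_uuid, major_version) < upgrade_threshold
```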
  14. Sep 12, 2024
  15. Sep 11, 2024
    • Fix some minor collections issues (#47472) · 0c4a3f8a
      John Swanson authored
      * Fix some minor collections issues
      
      - explicitly provide access to the trash to all users, in the same way
      we provide access to personal collections. Due to migration order, we
      don't necessarily have a permissions row with the correct `perm_type`,
      `perm_value`, and `collection_id` for the Trash collection. That's ok -
      we don't actually move things to the Trash, so there isn't any
      item where `collection_id=$trash.id` - but this may affect things like
      effective ancestors for children of the trash. Let's be explicit about
      the permissions that users have.
      
      - replace a case where we manually calculated effective location and
      then got the parent_id with just hydrating `:effective_parent`. This is
      more efficient.
      
      - replace the simple hydration method for `effective-location-path` with
      a batched hydration method that fetches `visible-collection-ids` *once*
      and then uses it to figure out the effective location path for each
      collection passed.
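The batched-hydration idea can be sketched like this (Python stand-in with hypothetical names, not the actual Toucan hydration API): the batched version fetches the visible collection ids once and reuses them for every collection, instead of issuing one fetch per row.

```python
# Sketch: compute visible-collection-ids once for the whole batch, then
# derive each collection's effective location path from it.
def effective_location_path(location, visible_ids):
    # Keep only the ancestors the current user is allowed to see.
    return [cid for cid in location if cid in visible_ids]


def hydrate_effective_location_batched(collections, fetch_visible_ids):
    visible = fetch_visible_ids()  # one permissions query for the batch
    for coll in collections:
        coll["effective_location"] = effective_location_path(coll["location"], visible)
    return collections
```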
  16. Sep 10, 2024
  17. Sep 09, 2024