This project is mirrored from https://github.com/metabase/metabase.
- Oct 07, 2024
Case Nelson authored
* feat: move auth providers behind ee token

  Fixes #48235. Introduces a new premium feature, `database-auth-providers`. Moves fetch-auth behind defenterprise - OSS will always return an empty map. Add metabase.util.http to test outbound HTTP requests.

* Fix broken refs
* Drop defmethod, as ad-hoc overrides aren't desirable outside ee
* Drop unnecessary require
* Fix token and tests
* Fix tests
* Fix formatting
* Fix var cast exception
* Fix connection test
* Move test to ee namespace
* Move more tests behind enterprise
* Fix checked-section hiding
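For readers outside the codebase, here is a minimal, self-contained sketch of the pattern this commit describes - an EE implementation gated on the `database-auth-providers` premium feature, with OSS always falling back to an empty map. The `token-has-feature?` and `fetch-auth-ee` helpers are illustrative stand-ins; the real change uses Metabase's `defenterprise` machinery.

```clojure
;; Illustrative sketch only: models "EE implementation behind a premium
;; feature, OSS returns an empty map". Not the actual defenterprise macro.
(defn- token-has-feature?
  "Stand-in for the premium-feature check backed by the store token."
  [feature]
  (contains? #{:database-auth-providers} feature))

(defn- fetch-auth-ee
  "Stand-in EE implementation resolving provider-specific credentials."
  [database]
  {:username (get-in database [:details :user])
   :password "resolved-by-auth-provider"})

(defn fetch-auth
  "Provider auth details when the feature is enabled; otherwise {} (OSS)."
  [database]
  (if (token-has-feature? :database-auth-providers)
    (fetch-auth-ee database)
    {}))
```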
-
- Oct 04, 2024
Ryan Laurie authored
* add update channels in product
* support for changing release notes to show beta and nightly info
* don't export setting
* obey the linter and add tests
* export setting
* update e2e tests
* clojure magic
* clojure-foo
* better localization
* sorry mr linter
* add more tests
-
Ngoc Khuat authored
-
- Oct 03, 2024
lbrdnk authored
* Use context for field id computation while hydrating dashboard
* Update docstring
* Fix target
* Update lookup
* Fix param-target usage
* Unskip e2e
* Bind *param-id-context* in public dashboard computation
* Fix context update
* Fix field-ids->param-field-values-ignoring-current-user
* Add tests
* Comments
* cljfmt
* Update field-ids->param-field-values-ignoring-current-user
* Update src/metabase/models/params.clj (Co-authored-by: Braden Shepherdson <braden@metabase.com>)
* Update src/metabase/models/params.clj (Co-authored-by: Braden Shepherdson <braden@metabase.com>)
* Use atom instead of volatile!
* Avoid dynamic function var
* Update src/metabase/models/params.clj (Co-authored-by: Braden Shepherdson <braden@metabase.com>)
* Address remarks
---------
Co-authored-by: Braden Shepherdson <braden@metabase.com>
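A minimal sketch of the context idea described in this commit: bind one cache per hydration pass so repeated field-id computations are shared. All names here (`*param-id-context*`, `compute-field-ids`, `param-field-ids`) are illustrative, not Metabase's actual vars.

```clojure
;; Illustrative sketch: bind one cache per dashboard-hydration pass so
;; nested field-id lookups are computed once. Names are made up.
(def ^:dynamic *param-id-context*
  "When bound, an atom caching dashboard-id -> field ids."
  nil)

(defn- compute-field-ids
  "Stand-in for the expensive param field-id lookup."
  [_dashboard-id]
  #{1 2 3})

(defn param-field-ids
  "Uses the bound context as a cache when one is available."
  [dashboard-id]
  (if *param-id-context*
    (or (get @*param-id-context* dashboard-id)
        (let [ids (compute-field-ids dashboard-id)]
          (swap! *param-id-context* assoc dashboard-id ids)
          ids))
    (compute-field-ids dashboard-id)))

(defn hydrate-dashboard
  "Binds the context once; the nested lookups below share the same atom."
  [dashboard-id]
  (binding [*param-id-context* (atom {})]
    [(param-field-ids dashboard-id)
     (param-field-ids dashboard-id)]))
```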
-
Chris Truter authored
-
- Oct 02, 2024
appleby authored
* Add BE test for exporting self-joined renamed columns
* Add e2e test for exporting self-joined renamed columns

See also:
- Not renamed fields in a same table join inherit the renamed name in exports #48046
- backport: Support both name & field ref-based column keys in viz settings on read and upgrade on write #48243
-
- Oct 01, 2024
Chris Truter authored
-
Chris Truter authored
-
Chris Truter authored
-
- Sep 30, 2024
John Swanson authored
* Do not cache all token check failures

  We want to cache token checks to avoid an issue where we repeatedly ask the store "hey, is this token valid?? is this token valid?? is this token valid??" for the same token. However, transient errors can also occur. For example, maybe a network issue causes the HTTP request to fail entirely. In this case, if we cache the result, the user needs to restart Metabase (or wait 5 minutes until the cache is cleared) before they can attempt to validate their token again.

  This PR moves the cache logic deeper into the stack. We want to cache "successful" responses from the store API - cases where the store has told us categorically that the token is or is not valid. We don't want or need to cache other things that might happen. Maybe your token isn't the right length - we can recalculate that, it's ok. Maybe you get a 503 error from the Store - we should let you retry. Maybe your network is having issues and you can't contact the Store at all - again, we should let you retry.

  The one potential issue I see here is that if the store goes down, we'll massively increase the number of requests we send to the store, potentially making it harder to recover. If this is a concern, I can add a circuit breaker: if we repeatedly get errors back from the store, back off and stop making requests for a while.

* Add a circuit breaker to store API requests

  In the pathological case where the store goes down for more than 5 minutes, the cache will expire and all instances everywhere will start repeatedly making requests for token validation at once. This might make recovering from an outage more difficult.

  This adds a circuit breaker around the API request. If the call repeatedly throws (5XX errors, socket timeouts, etc.), we'll pause for 1 minute, during which time all calls to token validation will immediately fail without making any request to the API. After one minute, we'll allow one request through to the API. If it succeeds, we'll go back to normal operation. Otherwise, we'll wait another minute.
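A rough, self-contained sketch of the two behaviours described above: cache only definitive store responses, and trip a circuit breaker after repeated errors. Names, thresholds, and return shapes are assumptions, and the real change (per the description) lets a single probe request through once the breaker's minute is up rather than reopening fully.

```clojure
;; Rough sketch (made-up names and thresholds): cache only definitive store
;; responses; open a one-minute breaker after repeated errors.
(def ^:private token-cache (atom {}))   ; token -> {:value result, :at millis}
(def ^:private breaker     (atom {:failures 0 :open-until 0}))

(def ^:private cache-ttl-ms      (* 5 60 1000))
(def ^:private breaker-pause-ms  (* 60 1000))
(def ^:private breaker-threshold 5)

(defn- now-ms [] (System/currentTimeMillis))

(defn- call-store!
  "Stand-in for the real HTTP call; may throw on 5XX / network errors."
  [_token]
  {:valid? true})

(defn- record-failure!
  "Open the breaker for a minute once we've seen enough consecutive errors."
  []
  (let [{:keys [failures]} (swap! breaker update :failures inc)]
    (when (>= failures breaker-threshold)
      (reset! breaker {:failures 0 :open-until (+ (now-ms) breaker-pause-ms)}))))

(defn check-token
  "Cached definitive answers are reused; transient errors are never cached,
   so the user can simply retry."
  [token]
  (let [{:keys [value at]} (get @token-cache token)]
    (cond
      ;; fresh, definitive answer from a previous call
      (and value (< (- (now-ms) at) cache-ttl-ms)) value
      ;; breaker open: fail fast without calling the store
      (< (now-ms) (:open-until @breaker)) {:valid? false :error "store unavailable, retry shortly"}
      :else
      (try
        (let [result (call-store! token)]
          (swap! breaker assoc :failures 0)                        ; healthy again
          (swap! token-cache assoc token {:value result :at (now-ms)})
          result)
        (catch Exception _e
          (record-failure!)
          {:valid? false :error "transient error talking to the store (not cached)"})))))
```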
-
- Sep 27, 2024
Noah Moss authored
-
Braden Shepherdson authored
This was old logic to support certain drivers (e.g. pre-JDBC Druid) and isn't required for most. It's perfectly sound to filter or even break out on a datetime column without bucketing. Adds a new `:temporal/requires-default-unit` driver feature, and enables it only for the legacy Druid driver. Fixes #47341
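A self-contained sketch of a per-driver feature flag like the one described here; the multimethod below mirrors the shape of Metabase's driver-feature dispatch but is illustrative, not the PR's code.

```clojure
;; Illustrative sketch of an opt-in driver feature flag.
(defmulti supports-feature?
  "Does `driver` support `feature`?"
  (fn [driver feature] [driver feature]))

;; Most drivers can filter/break out on unbucketed datetime columns,
;; so the feature is off by default.
(defmethod supports-feature? :default
  [_driver _feature]
  false)

;; Only the legacy Druid driver still needs a default temporal unit.
(defmethod supports-feature? [:druid :temporal/requires-default-unit]
  [_driver _feature]
  true)

(comment
  (supports-feature? :postgres :temporal/requires-default-unit) ;; => false
  (supports-feature? :druid    :temporal/requires-default-unit) ;; => true
  )
```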
-
adam-james authored
Fixes #46575. A Pivot Table question based on a model with at least one column derived from a join failed to display row totals. This is because the pivot-options map was being miscalculated: not all column indices were correctly found and passed into the :pivot-rows or :pivot-cols keys, so the pivot query did not compute all necessary data. Here, I just modify the :lib/source key of the columns whose source is a card (as determined by the existence of :lib/card-id). The columns being checked all have :source/breakout, which caused the "NAME" column in the issue's repro example to be missed. If a column instead has :lib/source :source/card, the logic inside `lib/find-matching-column` works.
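A minimal sketch of the column adjustment described above, assuming columns are plain maps with `:lib/card-id` and `:lib/source` keys: anything that comes from a card gets `:lib/source :source/card` so matching logic can find it.

```clojure
;; Illustrative only: re-tag card-sourced columns so lookups can match them.
(defn adjust-card-columns
  "Columns that originate from a card get :lib/source :source/card."
  [columns]
  (mapv (fn [{:keys [lib/card-id] :as col}]
          (if card-id
            (assoc col :lib/source :source/card)
            col))
        columns))

(adjust-card-columns
 [{:name "NAME"  :lib/card-id 42 :lib/source :source/breakout}
  {:name "COUNT" :lib/source :source/aggregation}])
;; => the "NAME" column now has :lib/source :source/card
```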
-
Chris Truter authored
-
Chris Truter authored
-
- Sep 26, 2024
lbrdnk authored
* Databricks JDBC driver base
* Add databricks CI job
* WIP data loading -- it works, further cleanup needed
* Cleanup
* Implement ->honeysql to enable data loading
* Hardcode catalog job var
* Implement driver methods and update tests
* Derive hive instead of sql-jdbc
* Cleanup leftovers after deriving hive
* Run databricks tests on push
* Cleanup and enable set-timezone
* Disable database creation by tests
* Add Databricks to broken drivers for timezone tests
* Exclude Databricks from test
* Enable have-select-privilege?-test
* Restore sql-jdbc-drivers-using-default-describe-table-or-fields-impl post rebase
* Restore joined-date-filter-test
* Adjust to work with dataset definition tests
* Adjust alternative date tests
* Remove leftover reflection warning set
* Update test exts
* cljfmt vscode
* Add databricks to kondo drivers
* Update metabase-plugin.yaml
* Update databricks_jdbc.clj
* Rework test extensions
* Update general data loading code to work with Databricks
* Reset tests to orig
* Use DateTimeWithLocalTZ for TIMESTAMP database type
* Convert to LocalDateTime for set-parameter
* Update test extensions field-base-type->sql-type
* Update database-type->base-type
* Enable creation of time columns in test data even though not supported
* Fix typo
* Update tests
* Update tests
* Update drivers.yml
* Disable dynamic dataset loading tests
* Adjust the iso-8601-text-fields-should-be-queryable-date-test
* Update load-data/row-xform
* Add time type exception to test
* Update test data loading and enable test
* Whitespace
* Enable all driver jobs
* Update comment
* Make catalog mandatory
* Remove comment
* Remove log level from spec generation
* Update sql.qp/datetime-diff
* Update read-column-thunk
* Remove comment
* Simplify date-time->results-local-date-time
* Update comment
* Move definitions
* Update test extension types mapping
* Remove now obsolete ddl/insert-rows-honeysql-form implementation
* Update sql-jdbc.conn/connection-details->spec for perturb-db-details
* Update load-data/do-insert!
* Remove ssh tunnel from driver as tests do not work with it
* Update test
* Promote ::dynamic-dataset-loading to :test/dynamic-dataset-loading and modify corresponding tests
* Adjust to broken TIMESTAMP_NTZ sync
* Update read-column-thunk to return timestamps always in Z
* Comment
* Disable tests for dynamic datasets
* Return spark jobs into drivers.yml
* Update Databricks CI catalog
* Remove vscode cljfmt tweak
* Update iso-8601-text-fields-expected-rows
* Update datetime-diff
* Formatting
* cljfmt
* Add placeholder test
* Remove comment
* cljfmt
* Use EnableArrow=0 connection param
* Remove comment
* Comment
* Update tests
* cljfmt
* Update driver's deps.edn
* Update tests
* Implement alternative `describe-table`
* WIP Workaround for timestamp_ntz sync, will be thrown away probably
* Update metabase-plugin.yaml with schema filters
* Update driver to use schema filters and remove now redundant sync implementations
* Update tests
* Update test extensions
* Update test
* Add feature flags for fast sync
* Implement describe-fields
* Implement describe-fks-sql
* Enable fast sync features
* Use full_data_type
* Comment
* Add exception for timestamp_ntz columns to new sync code
* Implement db-default-timezone
* Add timestamp_ntz ignored test
* Add db-default-timezone-test
* Fix typo
* Update setReadOnly
* Add comment on setAutoCommit
* Update chunk-size
* Add timezone-in-set-and-read-functions-test
* Drop Athena from driver exceptions
* Use set/intersection instead of a filter
* Add explicit fast-sync tests
* Update describe-fields-sql and add comment
* Add preprocess-additional-options
* Add leading semicolon test
* Disable dataset creation and update comment
* Rename driver to `databricks`
* Use old secret names
* Fix wrongly copied hsql list
* Temporarily allow database creation
* Add *allow-database-deletion*
* Temporarily allow database creation
* Disable database creation
* cljfmt
* cljfmt
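For orientation, a hedged sketch of what a JDBC connection spec for such a driver might look like. Only the mandatory catalog and the `EnableArrow=0` parameter come from the commit messages above; the detail keys (`:host`, `:http-path`, `:token`, ...) and the other connection parameters are assumptions for illustration, not the PR's code.

```clojure
;; Hypothetical connection-spec builder for a Databricks-style JDBC driver.
(defn databricks-connection-spec
  [{:keys [host http-path token catalog schema additional-options]}]
  (cond-> {:classname   "com.databricks.client.jdbc.Driver"
           :subprotocol "databricks"
           :subname     (str "//" host ":443/" (or schema "default"))
           :httpPath    http-path
           :AuthMech    3        ; token-based auth (assumption)
           :UID         "token"
           :PWD         token
           :ConnCatalog catalog  ; catalog is mandatory per the PR
           :EnableArrow 0}       ; connection param mentioned above
    (seq additional-options) (assoc :additional-options additional-options)))
```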
-
Chris Truter authored
-
- Sep 25, 2024
Noah Moss authored
Co-authored-by: Thomas Schmidt <thomas@metabase.com>
-
John Swanson authored
* Require `:encryption` for string settings

  For settings that are not typed as JSON, CSV, or strings, encryption now defaults to `:no` (*except* if you have explicitly marked your setting as `:sensitive?` - these will default to `:when-encryption-key-set`). I went through all our settings and provided what I think are reasonable values here. I tried to be conservative - when I wasn't sure whether a stored setting was sensitive, I kept it as encrypted. For example, the `ldap-port` setting is probably non-sensitive, but theoretically someone could be using a weird port for security-by-obscurity, so I kept that encrypted.

* Change possible values for `:encryption`

  `:maybe` was confusing: let's be more explicit that the value will be encrypted `:when-encryption-key-set`, to make it obvious what actually turns encryption on and off.
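A short sketch, in Metabase's `defsetting` style, of how the option described above reads. The `ldap-port` entry mirrors the reasoning in the commit; `example-banner-text` is a hypothetical setting invented purely for illustration, and the docstrings and `:type` values are assumptions.

```clojure
;; Illustrative only - shows the :encryption option with the two values
;; discussed above. example-banner-text is a made-up setting.
(defsetting ldap-port
  "Server port for the LDAP integration."
  :type       :integer
  ;; probably non-sensitive, but a non-standard port could be chosen for
  ;; security-by-obscurity, so keep it encrypted
  :encryption :when-encryption-key-set)

(defsetting example-banner-text
  "A plain, non-sensitive string setting: no encryption."
  :type       :string
  :encryption :no)
```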
-
- Sep 24, 2024
Chris Truter authored
-
Chris Truter authored
-
- Sep 20, 2024
Chris Truter authored
* Return the table id from csv upload (as a header)
* Add date format that google sheets uses
* Smuggle id
* done?
* comment
* with empty h2 db
* dashing
* fix flaky test
* Parse the correct date class
---------
Co-authored-by: Alexander Solovyov <alexander@solovyov.net>
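A tiny sketch of the first bullet's idea, assuming a Ring-style response map; the `X-Metabase-Table-Id` header name is a guess for illustration, not necessarily the one the PR uses.

```clojure
;; Hypothetical: return the newly created table's id as a response header.
(defn csv-upload-response
  [{:keys [id] :as _table}]
  {:status  200
   :headers {"X-Metabase-Table-Id" (str id)}
   :body    {:status "ok"}})
```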
-
- Sep 18, 2024
Chris Truter authored
-
- Sep 17, 2024
Chris Truter authored
-
adam-james authored
* Pivot Exports/Downloads Should Not Include pivot-grouping or 'Extra' Rows

  Fixes #46561. When downloading a pivot table's data, a 'pivot-grouping' column is included, as well as 'extra' rows that correspond to various totals in the table. This PR removes the pivot-grouping column and any of these 'extra' rows from the downloads/exports.

* fix xlsx
* Fix csv
* fix json
* add test and fix some failing tests
* fmt
* Update src/metabase/query_processor/streaming/csv.clj (Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>)
* static viz table renders filter out pivot-grouping
* address review feedback
* add some comments to try to explain the changes
---------
Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
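A minimal sketch of the filtering described in the first bullet, assuming rows are vectors and the pivot-grouping column index is known: keep only the group-0 data rows and drop that column.

```clojure
;; Illustrative only: drop the pivot-grouping column and "extra" total rows
;; (grouping > 0) before export. Row/column shapes are simplified assumptions.
(defn strip-pivot-rows
  "Keep only the data rows (pivot group 0) and remove the grouping column."
  [pivot-group-idx rows]
  (->> rows
       (filter #(zero? (nth % pivot-group-idx)))
       (mapv (fn [row]
               (let [v (vec row)]
                 (into (subvec v 0 pivot-group-idx)
                       (subvec v (inc pivot-group-idx))))))))

(strip-pivot-rows 2 [["Widget" 10 0]
                     ["Gadget" 20 0]
                     ["Totals" 30 1]])
;; => [["Widget" 10] ["Gadget" 20]]
```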
-
- Sep 13, 2024
metamben authored
-
metamben authored
* Ensure :effective-type for columns from an aggregation in a card

  Fixes #47184

* Synchronize effective-type with the base-type on override

  When :base-type in the column metadata is overridden with the :base-type in the field ref, set it as :effective-type too. If :effective-type is also set in the field ref, it wins.
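A small sketch of the merge rule in the second bullet, assuming column metadata and field-ref options are plain maps with `:base-type` / `:effective-type` keys.

```clojure
;; Illustrative: a :base-type override from the field ref also updates
;; :effective-type, unless the ref sets :effective-type explicitly.
(defn merge-ref-types
  [col-metadata {ref-base :base-type ref-effective :effective-type}]
  (cond-> col-metadata
    ref-base      (assoc :base-type ref-base :effective-type ref-base)
    ref-effective (assoc :effective-type ref-effective)))

(merge-ref-types {:base-type :type/Number :effective-type :type/Number}
                 {:base-type :type/Integer})
;; => {:base-type :type/Integer, :effective-type :type/Integer}
```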
-
appleby authored
* Rollback to legacy unique-name-generator in annotate-native-cols
* Add native query long column alias test to nested queries tests
* Add native query long column alias test to dataset api tests
* Add native query long column alias test to models e2e tests
* Move limit into inner native query
* PR suggestion: simplify lambda argslist (Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>)
* PR suggestion: remove unnecessary mt/with-temp-copy-of-db
* Tweak test name
---------
Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
-
dpsutton authored
* Initial commit of controlled upgrades

  - new setting upgrade-threshold (`MB_UPGRADE_THRESHOLD`), a number 0-100
  - conditionally removing latest from upgrade checks. AS OF RIGHT NOW IT ALWAYS REMOVES the latest. Should help FE make this optional

* dumb mistake
* handle no latest version info
* implement and basic test
* More tests, refactor name
* more tests
* cljfmt does not like commas in source code.
* Just move site-uuid to the declaration and don't declare it. Nothing is sacred in the settings order. No reason; I think I was just trying to minimize the diff, which is a dumb and ignoble goal.
* add context strings
* update unit tests
* use display version
* Have upgrade threshold include current major version. Hopefully helps rotate people from early to late, and late to early, across major versions. Idea from sanya and quite nice!
---------
Co-authored-by: Ryan Laurie <iethree@gmail.com>
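A hedged sketch of the rollout idea: derive a stable per-instance bucket from the site uuid plus the current major version and compare it against `MB_UPGRADE_THRESHOLD`. The hashing scheme below is an assumption, not the PR's exact algorithm.

```clojure
;; Hypothetical bucketing scheme for a staged-rollout check.
(defn rollout-bucket
  "Stable pseudo-random value in [0, 100) for this instance and major version."
  [site-uuid current-major]
  (mod (Math/abs (hash (str site-uuid "-" current-major))) 100))

(defn offer-latest?
  "Only surface the latest release when the bucket falls under the threshold."
  [site-uuid current-major upgrade-threshold]
  (< (rollout-bucket site-uuid current-major) upgrade-threshold))

;; with a threshold of 25, roughly 25% of instances see the latest release,
;; and the population rotates when the major version changes
(offer-latest? "00000000-0000-0000-0000-000000000000" 50 25)
```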
-
- Sep 12, 2024
Alexander Polyankin authored
-
Chris Truter authored
-
- Sep 11, 2024
Alexander Solovyov authored
-
John Swanson authored
* Fix some minor collections issues

  - explicitly provide access to the trash to all users, in the same way we provide access to personal collections. Due to migration order, we don't necessarily have a permissions row with the correct `perm_type`, `perm_value`, and `collection_id` for the Trash collection. That's ok - we don't actually move things to the Trash, so there isn't any item where `collection_id=$trash.id` - but this may affect things like effective ancestors for children of the trash. Let's be explicit about the permissions that users have.
  - replace a case where we manually calculated effective location and then got the parent_id with just hydrating `:effective_parent`. This is more efficient.
  - replace the simple hydration method for `effective-location-path` with a batched hydration method that fetches `visible-collection-ids` *once* and then uses it to figure out the effective location path for each collection passed.
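A self-contained sketch of the batched hydration in the last bullet: fetch the visible collection ids once, then derive each collection's effective location from that one set. Function names and the location-path format are illustrative, not Metabase's actual hydration API.

```clojure
(require '[clojure.string :as str])

(defn- visible-collection-ids
  "Stand-in for the (expensive) permissions query, run once per batch."
  [_user]
  #{1 2 5})

(defn- effective-location
  "Keep only the ancestors the user can see, e.g. \"/1/3/5/\" -> \"/1/5/\"."
  [visible-ids {:keys [location]}]
  (let [ancestors (remove empty? (str/split (or location "/") #"/"))
        kept      (filter #(contains? visible-ids (parse-long %)) ancestors)]
    (if (seq kept)
      (str "/" (str/join "/" kept) "/")
      "/")))

(defn hydrate-effective-location
  "Batched: resolve visible ids once, then annotate every collection."
  [user collections]
  (let [visible (visible-collection-ids user)]
    (mapv #(assoc % :effective_location (effective-location visible %)) collections)))
```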
-
Cam Saul authored
-
- Sep 10, 2024
metamben authored
Fixes #47735
-
John Swanson authored
* Don't encrypt `site-locale`
* Fix test for `site-locale`

  The previous test required encryption. Theoretically, the referenced error just depends on the fact that something is logged during the process of getting the value, so I reworked it to depend on that directly rather than triggering it through an encryption key rotation.
-
Alexander Polyankin authored
-
Chris Truter authored
-
Chris Truter authored
-
- Sep 09, 2024
Braden Shepherdson authored
Fixes #45938. Remapping is handled at the top level of the query, and should be ignored for inner queries. This PR hides it from `results_metadata` on source queries. It also fixes the interaction with `large-int-id` logic to make big integers JS-safe.
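A minimal sketch of the idea, assuming column metadata maps carry `:remapped_from` / `:remapped_to` keys: strip the remapping bookkeeping when emitting results_metadata for an inner (source) query, so remapping is only applied at the top level.

```clojure
;; Illustrative only; the exact key names are assumptions.
(defn strip-remapping
  [source-query-cols]
  (mapv #(dissoc % :remapped_from :remapped_to) source-query-cols))

(strip-remapping [{:name "CATEGORY_ID"   :remapped_to   "CATEGORY_NAME"}
                  {:name "CATEGORY_NAME" :remapped_from "CATEGORY_ID"}])
;; => [{:name "CATEGORY_ID"} {:name "CATEGORY_NAME"}]
```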
-