This project is mirrored from https://github.com/metabase/metabase.
- Oct 22, 2024
Alexander Solovyov authored
Ring was updated a while ago to depend on org.apache.commons/commons-fileupload2-core.
Luis Paolini authored
A newer Ring release supports Jetty 11.0.24: https://github.com/ring-clojure/ring/blob/master/CHANGELOG.md
- Oct 21, 2024
Chris Truter authored
Oleksandr Yakushev authored
- Oct 16, 2024
Chris Truter authored
Ngoc Khuat authored
- Oct 14, 2024
Ngoc Khuat authored
* [Notification] Migrate user invited email (#48215)
* [Notification] Migrate alert create email (#48292)
* [Notification] Migrate slack token error email (#48333)
Cam Saul authored
* Collapse `metabase.shared.*` namespaces
* Fix Kondo warnings
* Does updating the stories-data keys fix the failing tests?
* Appease msgcat
* Appease msgcat
* Fix typo
* Make the build happy
* Appease fslint
- Oct 08, 2024
lbrdnk authored
* Address initial remarks
* Extract hive-like to separate module and set it as dependency
* Remove hive-like also from spark
Cam Saul authored
* Parallel driver tests PoC
* Set fail-fast to false for now
* Try splitting up non-driver tests to see how broken tests are
* Whoops fix plain BE tests
* Ok nvm I'll test this in another branch
* Fix fail-fast
* Experiment with the improved Hawk split logic
* Fix some broken/flaky tests
* Experiment: try splitting MySQL 8 tests into FOUR jobs
* Divide other Postgres and MySQL tests up and use num-partitions = 2
* Another test fix
* Flaky test fix
* Try making more stuff fast
* Make athena fast??
* Fix a few more things
* Test fixes?
* Fix configs
* Fix Mongo job syntax
* Fix busted test from #46942
* Fix Mongo config again
* wait-for-port needs to specify shell I guess
* More cleanup
* await-port can't have a timeout-minutes I guess
* Let's only parallelize MySQL for now.
* Cleanup action
* Cleanup wait-for-port action
* Fix another flaky test
* NOW driver tests will be FAST
* Need to mark driver tests too
* Fix wrong tag
* Use Hawk 1.0.5
* Fix busted metabase.public-settings-test/landing-page-setting-test
* Fix busted `metabase.api.database-test/get-database-test` etc. (hopefully)
* Fix busted `metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test` for Oracle
* Maybe this fixed `metabase.query-processor.middleware.permissions-test/e2e-ignore-user-supplied-perms-test`, maybe not
* Fix busted metabase.api.dashboard-test/dependent-metadata-test because endpoint had different sort order than test
* Ok my test fix did not work
* Fix metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test for Redshift
* Better test name
* More test fixes
* Schema fix
* PR feedback
* Split off test partitioning into separate PR
* Fix failing Oracle tests
* Another round of test fixes, hopefully
* Fix failing Redshift tests
* Maybe the last round of test fixes
* Fix Oracle
* Fix stray line
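To give the `num-partitions` experiments above some context: splitting a suite across CI jobs generally reduces to a deterministic partition function over the sorted list of test namespaces, roughly as sketched below. This is a generic illustration, not Hawk's actual split logic; the function name and arguments are hypothetical.

```
(defn partition-tests
  "Deterministically assign test namespaces to one of `num-partitions` buckets
  and return the bucket for `partition-index` (0-based). Every CI job calls
  this with the same namespace list, so the union of all buckets covers the
  full suite and no namespace runs twice."
  [test-namespaces num-partitions partition-index]
  (->> (sort test-namespaces)
       (map-indexed vector)
       (filter (fn [[i _ns]] (= (mod i num-partitions) partition-index)))
       (map second)))

(comment
  (partition-tests '[metabase.a-test metabase.b-test metabase.c-test] 2 0)
  ;; => (metabase.a-test metabase.c-test)
  )
```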
- Sep 30, 2024
John Swanson authored
* Do not cache all token check failures

  We want to cache token checks to avoid an issue where we repeatedly ask the store "hey, is this token valid?? is this token valid?? is this token valid??" for the same token.

  However, transient errors can also occur. For example, maybe a network issue causes the HTTP request to fail entirely. In this case, if we cache the result, the user needs to restart metabase (or wait 5 minutes until the cache is cleared) before they can attempt to validate their token again.

  This PR moves the cache logic deeper into the stack. We want to cache "successful" responses from the store API - cases where the store has told us categorically that the token is or is not valid. We don't want or need to cache other things that might happen. Maybe your token isn't the right length - we can recalculate that, it's ok. Maybe you get a 503 error from the Store - we should let you retry. Maybe your network is having issues and you can't contact the Store at all - again, we should let you retry.

  The one potential issue I see here is that if the store goes down, we'll massively increase the number of requests we send to the store, potentially making it harder to recover. If this is a concern, I can add a circuit breaker: if we repeatedly get errors back from the store, back off and stop making requests for a while.

* Add a circuit breaker to store API requests

  In the pathological case where the store goes down for >5 minutes, the cache will expire and all instances everywhere will start repeatedly making requests for token validation at once. This might make recovering from an outage more difficult.

  This adds a circuit breaker around the API request. If the call repeatedly throws (5XX errors, socket timeouts, etc.) then we'll pause for 1 minute, during which time all calls to token validation will immediately fail without making any request to the API. After one minute, we'll allow one request through to the API. If it succeeds, we'll go back to normal operation. Otherwise, we'll wait another minute.
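The backoff described above boils down to a small amount of state: a consecutive-failure counter and an "open until" timestamp. The sketch below is only an illustration of that idea, with hypothetical names (`breaker-state`, `with-circuit-breaker`, `fetch-token-status`); it is not the code that shipped, which may well lean on an existing resilience library.

```
;; Minimal circuit-breaker sketch (hypothetical names, simplified semantics).
(def ^:private failure-threshold 5)
(def ^:private open-duration-ms (* 60 1000)) ; stay open for one minute

(def breaker-state
  (atom {:failures   0    ; consecutive failures seen so far
         :open-until 0})) ; ms timestamp before which calls fail fast

(defn with-circuit-breaker
  "Run `f` (e.g. the HTTP token-check call). After `failure-threshold`
  consecutive failures, fail fast for `open-duration-ms` without calling `f`.
  A full implementation would also track a half-open state so that a failed
  trial call re-opens the breaker immediately."
  [f]
  (let [now (System/currentTimeMillis)]
    (if (< now (:open-until @breaker-state))
      (throw (ex-info "Token check circuit breaker is open"
                      {:retry-after-ms (- (:open-until @breaker-state) now)}))
      (try
        (let [result (f)]
          (swap! breaker-state assoc :failures 0)
          result)
        (catch Exception e
          (swap! breaker-state
                 (fn [{:keys [failures] :as state}]
                   (let [failures' (inc failures)]
                     (if (>= failures' failure-threshold)
                       {:failures 0 :open-until (+ now open-duration-ms)}
                       (assoc state :failures failures')))))
          (throw e))))))
```

A caller would wrap the store request, e.g. `(with-circuit-breaker #(fetch-token-status token))`, and treat the breaker exception like any other transient failure.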
- Sep 26, 2024
lbrdnk authored
* Databricks JDBC driver base
* Add databricks CI job
* WIP data loading -- it works, further cleanup needed
* Cleanup
* Implement ->honeysql to enable data loading
* Hardcode catalog job var
* Implement driver methods and update tests
* Derive hive instead of sql-jdbc
* Cleanup leftovers after deriving hive
* Run databricks tests on push
* Cleanup and enable set-timezone
* Disable database creation by tests
* Add Databricks to broken drivers for timezone tests
* Exclude Databricks from test
* Enable have-select-privilege?-test
* Restore sql-jdbc-drivers-using-default-describe-table-or-fields-impl post rebase
* Restore joined-date-filter-test
* Adjust to work with dataset definition tests
* Adjust alternative date tests
* Remove leftover reflection warning set
* Update test exts
* cljfmt vscode
* Add databricks to kondo drivers
* Update metabase-plugin.yaml
* Update databricks_jdbc.clj
* Rework test extensions
* Update general data loading code to work with Databricks
* Reset tests to orig
* Use DateTimeWithLocalTZ for TIMESTAMP database type
* Convert to LocalDateTime for set-parameter
* Update test extensions field-base-type->sql-type
* Update database-type->base-type
* Enable creation of time columns in test data even though not supported
* Fix typo
* Update tests
* Update tests
* Update drivers.yml
* Disable dynamic dataset loading tests
* Adjust the iso-8601-text-fields-should-be-queryable-date-test
* Update load-data/row-xform
* Add time type exception to test
* Update test data loading and enable test
* Whitespace
* Enable all driver jobs
* Update comment
* Make catalog mandatory
* Remove comment
* Remove log level from spec generation
* Update sql.qp/datetime-diff
* Update read-column-thunk
* Remove comment
* Simplify date-time->results-local-date-time
* Update comment
* Move definitions
* Update test extension types mapping
* Remove now obsolete ddl/insert-rows-honeysql-form implementation
* Update sql-jdbc.conn/connection-details->spec for perturb-db-details
* Update load-data/do-insert!
* Remove ssh tunnel from driver as tests do not work with it
* Update test
* Promote ::dynamic-dataset-loading to :test/dynamic-dataset-loading and modify corresponding tests
* Adjust to broken TIMESTAMP_NTZ sync
* Update read-column-thunk to return timestamps always in Z
* Comment
* Disable tests for dynamic datasets
* Return spark jobs into drivers.yml
* Update Databricks CI catalog
* Remove vscode cljfmt tweak
* Update iso-8601-text-fields-expected-rows
* Update datetime-diff
* Formatting
* cljfmt
* Add placeholder test
* Remove comment
* cljfmt
* Use EnableArrow=0 connection param
* Remove comment
* Comment
* Update tests
* cljfmt
* Update driver's deps.edn
* Update tests
* Implement alternative `describe-table`
* WIP Workaround for timestamp_ntz sync, will be thrown away probably
* Update metabase-plugin.yaml with schema filters
* Update driver to use schema filters and remove now redundant sync implementations
* Update tests
* Update tests extensions
* Update test
* Add feature flags for fast sync
* Implement describe-fields
* Implement describe-fks-sql
* Enable fast sync features
* Use full_data_type
* Comment
* Add exception for timestamp_ntz columns to new sync code
* Implement db-default-timezone
* Add timestamp_ntz ignored test
* Add db-default-timezone-test
* Fix typo
* Update setReadOnly
* Add comment on setAutoCommit
* Update chunk-size
* Add timezone-in-set-and-read-functions-test
* Drop Athena from driver exceptions
* Use set/intersection instead of a filter
* Add explicit fast-sync tests
* Update describe-fields-sql and add comment
* Add preprocess-additional-options
* Add leading semicolon test
* Disable dataset creation and update comment
* Rename driver to `databricks`
* Use old secret names
* Fix wrongly copied hsql list
* Temporarily allow database creation
* Add *allow-database-deletion*
* Temporarily allow database creation
* Disable database creation
* cljfmt
* cljfmt
- Sep 10, 2024
Chris Truter authored
Chris Truter authored
- Sep 06, 2024
Alexander Solovyov authored
- Sep 04, 2024
Cam Saul authored
* Bump Kondo version to 2024.08.29
* Fix Kondo warnings from new version of Kondo
* Fix failing tests because schema was broken
- Sep 02, 2024
metamben authored
* Implement a simple greedy approximation for dedupe-joins
Tim Macdonald authored
Includes https://github.com/metabase/macaw/pull/97, which will prevent some false positives
- Aug 31, 2024
Oleksandr Yakushev authored
* perf: Use optimized stats functions for computing insights
* perf: Rewrite fingerprinters/with-error-handling to not generate closures
* fix: Don't reuse global-fingerprinter object
- Aug 30, 2024
Tim Macdonald authored
- Aug 29, 2024
Cam Saul authored
* Add `clojure -M:kondo` and `clojure -M:kondo:kondo/all` and bump version
* Fix Kondo errors
* Fix Kondo+LSP issues with `defendpoint`, `defenterprise`, etc.
* Use replace-deps instead of deps for speed
* Ok apparently maybe we do need to copy configs when we run Kondo on CI
* Oops `./bin/kondo.sh` should not try to use `clj-kondo`
* Remove references to GA driver folders
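For context, a `clojure -M:kondo` invocation is usually wired up through a deps.edn alias roughly like the sketch below. This is a hypothetical entry, not necessarily the one added here; it only illustrates why `:replace-deps` (rather than `:deps`/`:extra-deps`) is the faster choice, since it keeps the project's own dependency tree off the lint classpath. The version shown echoes the Sep 04 Kondo bump above; the original alias may have pinned an earlier release.

```
;; Hypothetical deps.edn alias for `clojure -M:kondo` (paths and version are
;; illustrative). :replace-deps means only clj-kondo's own dependencies are
;; resolved, so the linter starts without loading the whole project classpath.
{:aliases
 {:kondo
  {:replace-deps {clj-kondo/clj-kondo {:mvn/version "2024.08.29"}}
   :main-opts    ["-m" "clj-kondo.main" "--lint" "src" "test"]}}}
```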
- Aug 28, 2024
Noah Moss authored
- Aug 22, 2024
Cam Saul authored
Chris Truter authored
- Aug 21, 2024
Cam Saul authored
* Cljfmt config part 2
* Backport updated config and linter fork from part 3
* Update formatting
Chris Truter authored
- Aug 20, 2024
Cam Saul authored
* Cljfmt
* Fix new GH action
- Aug 12, 2024
Uladzimir Havenchyk authored
- Aug 07, 2024
- Aug 06, 2024
Oleksandr Yakushev authored
* perf: Optimize validation
* Bump Malli version
* perf: Optimize validation
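The commit titles above don't spell out what was optimized, so the snippet below is only a generic illustration of the most common Malli validation speed-up: compiling a schema into a validator function once (with `m/validator`) instead of re-interpreting the schema form on every `m/validate` call. The schema here is made up for the example.

```
(require '[malli.core :as m])

;; Interprets the schema form on every call -- fine for occasional checks.
(defn valid-user-slow? [user]
  (m/validate [:map [:id :int] [:email :string]] user))

;; Compile once, reuse the resulting predicate on hot paths.
(def valid-user?
  (m/validator [:map [:id :int] [:email :string]]))

(comment
  (valid-user? {:id 1 :email "a@example.com"}) ;; => true
  (valid-user? {:id "oops"}))                  ;; => false
```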
- Jul 16, 2024
Cam Saul authored
* Debug QP improvements from today's eng demo
* More taps
* Fixes
Ngoc Khuat authored
- Jul 12, 2024
bryan authored
* parallelize coll perm graph group lookup w/ core.async
* respond to code review
* add + uses ExecutorCompletionService map function
* rename test
* more tests
* no concurrency for cljs
* fix timeout + add more tests
* use claypoole
* cleanup
* add kondo hooks for claypoole
* cleanup
* Can't use libraries that don't have a license: Revert "add kondo hooks for claypoole" (this reverts commit 9d93b55b28d69267b65260f2b046420d67604361)
* ignore unused value -- TODO: check back in on claypoole to see if the kondo config gets merged
* fix linter
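Since this entry went back and forth between core.async, `ExecutorCompletionService`, and claypoole, here is a minimal, library-free sketch of the underlying pattern: fan work out onto a bounded thread pool and collect results as they complete. The function name and pool handling are hypothetical; this is not the code that was merged.

```
(import '(java.util.concurrent Executors ExecutorCompletionService TimeUnit))

(defn bounded-pmap
  "Apply `f` to each item on a fixed-size thread pool, returning results in
  completion order (fine when each result carries its own key, e.g. a group ID)."
  [pool-size f items]
  (let [pool (Executors/newFixedThreadPool pool-size)
        ecs  (ExecutorCompletionService. pool)]
    (try
      (doseq [item items]
        (.submit ecs ^Callable (fn [] (f item))))
      (mapv (fn [_] (.get (.take ecs))) items)
      (finally
        (.shutdown pool)
        (.awaitTermination pool 10 TimeUnit/SECONDS)))))
```

For a permission graph, `f` could look up one group's permissions and return a `[group-id perms]` pair, so the completion-order results are easy to pour into a map afterwards.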
- Jul 10, 2024
Chris Truter authored
A bunch of small improvements, notably a timeout and support for MSSQL square bracket quotes.
- Jun 28, 2024
adam-james authored
* Wrap non-latin characters in a span specifying a working font

  Fixes: #38753

  CSSBox seems to have a bug where font fallback doesn't work properly. This is noticeable when a font does not contain glyphs that are present in the string being rendered. For example, Lato does not have many international characters, so the rendered version of tables (that show up in Slack messages) will not render properly, making the card unreadable.

  Since this appears to be a downstream bug, I've opted to work around this limitation by wrapping any non-latin characters in a <span> that specifies the font family to be sans-serif, which should contain the glyphs to render properly. This leaves Lato in place for other characters.

  For now, I figured it's worth trying this solution over using Noto for 2 reasons:
  - we can keep Lato, which has been the decided font style for the app for some time (this keeps things consistent where possible)
  - the Noto font containing all glyphs is quite a large font (>20mb, I think) and it would be nice to avoid bundling that if we can.

* stub installed fonts test
* typo
* Do wrapping, but now per-string, and in Clojure data, not the HTML string

  I've decided that a reasonable solution is to still wrap strings containing non-Lato characters, but it's not done with str/replace on the HTML string; rather, it happens in a postwalk over the Hiccup data prior to rendering. This seems like a decent compromise without patching CSSBox or fixing upstream (which may be worth doing, but will take longer to land).

* add test checking that glyphs render correctly
* Add a test that directly checks the wrapping fn
* Change the string to keep the linter quiet
* Change how we check if a string can be rendered to a faster method, new Lato font

  With Sanya's help, the way I check whether a given string is renderable with Lato is now faster. Also use the full Lato font, not just the 'latin' Lato, so we can cover more chars.

* change Lato so that it loads the fn even in a fresh test run
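To make the postwalk idea concrete, here is a stripped-down sketch of that kind of wrapper. Everything here is a simplification for illustration: the predicate just checks for non-ASCII characters, whereas the real check asks whether the Lato font can display the glyphs, and a real version would also avoid touching strings inside attribute maps.

```
(require '[clojure.walk :as walk])

(defn- needs-fallback-font?
  "Hypothetical stand-in for 'can Lato render every glyph in this string?'"
  [s]
  (boolean (re-find #"[^\x00-\x7F]" s)))

(defn wrap-non-latin-strings
  "Walk Hiccup data and wrap any string that needs a fallback font in a span
  pinned to a sans-serif font-family, leaving everything else untouched."
  [hiccup]
  (walk/postwalk
   (fn [node]
     (if (and (string? node) (needs-fallback-font? node))
       [:span {:style "font-family: sans-serif;"} node]
       node))
   hiccup))

(comment
  (wrap-non-latin-strings [:td {} "Résultat 結果"])
  ;; => [:td {} [:span {:style "font-family: sans-serif;"} "Résultat 結果"]]
  )
```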
- Jun 26, 2024
Chris Truter authored
- Jun 24, 2024
Case Nelson authored
* fix: don't always optimize between expressions

  Fixes #42291

  The frontend produces expressions like

  ```
  [:between [:+ [:field ... {:temporal-bucket :day}] [:interval 2 :day]]
   [:relative-datetime -1 :week]
   [:relative-datetime 0 :week]]
  ```

  This should not be optimized because of the mixed `:day` and `:week` units. However, it was being optimized since the compatible units weren't being properly picked up by the match.

* Disable suspicious args eastwood lint, kondo does it better
* Stop autobucketing to day when adding a time interval
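As a purely illustrative sketch of the guard this fix calls for (hypothetical helpers, not the optimizer's actual match logic): collect every temporal unit the between clause mentions and only optimize when there is at most one distinct unit.

```
(require '[clojure.walk :as walk])

(defn- temporal-units
  "Hypothetical helper: gather temporal units appearing in an MBQL clause,
  whether as a :temporal-bucket option or as the unit of an :interval or
  :relative-datetime clause."
  [clause]
  (let [units (atom #{})]
    (walk/postwalk
     (fn [x]
       (cond
         (and (map? x) (:temporal-bucket x))
         (swap! units conj (:temporal-bucket x))

         (and (vector? x) (#{:interval :relative-datetime} (first x)))
         (swap! units conj (last x)))
       x)
     clause)
    @units))

(defn- safe-to-optimize-between?
  "Only optimize a between expression when all of its parts agree on one unit."
  [between-clause]
  (<= (count (temporal-units between-clause)) 1))

(comment
  (safe-to-optimize-between?
   [:between [:+ [:field 1 {:temporal-bucket :day}] [:interval 2 :day]]
    [:relative-datetime -1 :week]
    [:relative-datetime 0 :week]])
  ;; => false, because :day and :week are mixed
  )
```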
- Jun 18, 2024
Ngoc Khuat authored
- Jun 13, 2024
Chris Truter authored