This project is mirrored from https://github.com/metabase/metabase.
- Oct 16, 2024
- Chris Truter authored
- Noah Moss authored
- Alexander Solovyov authored
- Ngoc Khuat authored
- Oct 15, 2024
- bryan authored
* adds data + schema for metrics stats ping
* remove comment
* annotate todos
* fill in the rest of the metrics values - and add defaults
* fix some definitions + use a single timestamp
* shuffle stuff around to appease the namespace linter - we aren't reaching into the `api` API, so the linter is more of a formality here.
* update docstring
* fix jsonschema, maybe
* review responses
  - revert changes to 1-0-0
  - add metrics section to new file: 1-0-1
  - bump ::instance_stats to "1-0-1"
  - add tags into 1-0-1
  - make the code return tags (empty for now until we know what to tag things)
  - also fix test that broke from shuffling settings around
* remove a footgun
* add tags to metrics, add grouped_metrics to jsonschema 1-0-1 format 1-0-1
* indent
* cljfmt
* version should match filename
* update instance stats 1-0-1 schema
* require `tags` in grouped_metric + snowcat comment
* fix formatting noise
* update schema to make it valid
* remove grouped_metrics from instance_stats schema
* cljfmt appeasement
* unbin cache_num_queries_cached value - alphabetize metrics generation
* we can now guarantee metric values will be ints
* jsonschema for instance_uuid, settings, and grouped_metrics
* add analytics_uuid and make it required
* lint + alphabetize instance stats json schema
* update setting key type - add maxLength to some strings
* lint jsonschema
  - Add description to setting.items.tags
  - add maxLength, and {min,max}imum to setting.items.value
* Bump instance stats to 2-0-0
  - remove analytics_uuid string length == 36 check
  - adds assertion to ensure required fields are set (and it passes)
  - adds info for embedding settings
* adds a grouped-metric to stats ping
  - adds length info to the schema to pass jsonschema linting
* cljfmt
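A rough sketch of the shape this work implies for one metric entry in the stats ping; the key names below are inferred from the commit message (required tags, integer values) and are illustrative, not the actual schema:

```clojure
;; Illustrative only -- not the actual instance-stats schema.
{:metrics [{:name  "cache_num_queries_cached" ; unbinned raw count, per the message
            :value 42                         ; metric values are guaranteed ints
            :tags  []}]}                      ; required, but empty until we know what to tag
```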
- adam-james authored
* Incremental Pivot Processing for Exports
  WIP. Fixes pivot exports for CSV and xlsx. The CSV export should use less memory by incrementally building up the data structure and aggregating necessary row data right away, so the memory overhead becomes only as large as the total pivot result. In cases where the pivot rows/cols combine into many columns and rows, this can still be a large set of data, but it should behave much better now in most cases. The Excel export is a little more straightforward: create the export rows in the same fashion, streaming one row at a time, and just post-process the sheet to add the pivot table in one shot at the end.
* WIP adding row totals.
* aggregate totals as rows are added
  Row, column, section, and grand totals are all aggregated as each row is added. This means the final step of building pivot output becomes just an exercise of lookups/arrangement; no further aggregation is needed.
* CSV pivot works per-row, export respects formatting
  This is a big step forward; we don't need to hold the entire dataset in memory. We instead aggregate each row's data into the pivot data structure (a Clojure sketch of this aggregation follows this entry), which only holds onto:
  - unique values for each pivot-row, in a sorted set
  - unique values for each pivot-col, in a sorted set
  - grand total for each measure: N values, where N is the number of measures, usually 1 or 2
  - row totals for each combination of each pivot-row * N measures
  - col totals for each combination of each pivot-col * N measures
  - totals for each 'section', determined by unique values of the first pivot-row * N measures
  - values for each measure in every 'cell': Row Combos * Col Combos * N Measures
  So, there can still be a decent amount of data to store, but it will never hold onto all of the 'raw rows' from the dataset. We can never completely guarantee that Row Combos * Col Combos * N Measures remains small, but two things let us move forward anyway:
  - there's now visible feedback in the app that the download is running (or if it's failed)
  - pivot table utility diminishes rapidly with huge output anyway; users still need to curate/set up their data to improve the table's utility, so we can assume that a slow-to-download pivot table is also slow to use/less effective, and will likely be something the user doesn't want (as often)
* some test fixes
* now, if we export 'raw pivot rows', they don't show pivot-grouping and they also don't include the 'extra' rows for totals/subtotals/grand totals (any row with pivot-grouping > 0). This means that the non-pivot version of a pivot table export will match what a user sees if they change the viz to a regular table.
* remove old test
* re-incorporate some changes from master
* fix csv for non-pivots due to oversight in my changes
  This is just a temporary change. I think I should clean up this bit of the code a little; I can probably make it more readable and use cleaner logic regarding whether the rows are 'raw pivot rows' or not.
* start moving format_rows to POST body, add pivot_results too
  There's still wiring work to do, but this starts to add format_rows and pivot_results to the POST body for the various API endpoints. Also modify tests to improve coverage/consistency across downloads and alerts/subscriptions. The tests will not pass on this commit, but fixes will be incoming.
* native pivot tables in xlsx
* add precondition to pass migration linter
* try to get migrations fixed
* passing pivot-results through api and attachments
* fix tests for format_rows in BODY vs query param
* tests!
* might have the tests all fixed now
* the pivoted export now respects col/row totals settings
* add test coverage for public questions and dashboards
* col and row totals work as expected
* build-pivot refactor for clarity
* docstring change + tiny refactor in helper fn
* see if dashcard download works with format_rows
* csv pivot handles nil values
* pass format_rows and pivot_results in :params not :body
* fix some other tests
* pivot-grouping col filtered out of xlsx
* pivot-grouping-col removed for all rows
* configurable pivot exports and attachments (#47880)
* exports fe
* specs
* ui
* specs
* format/unformatted now works for xlsx
* format test changes for xlsx formatting
* embedding endpoints accept pivot_results
* cljfmt and eslint fix
* empty
* embedding test should have formatting defaulted to true
* embed test fixes
* Use `Chip` for export settings widget
* downloads e2e test fix
* fix public download limit test
* public card download defaults
* fix public download defaults in some tests
* Fix visual test

Co-authored-by: Aleksandr Lesnenko <alxnddr@users.noreply.github.com>
Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
Co-authored-by: Anton Kulyk <kuliks.anton@gmail.com>
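A minimal Clojure sketch of the incremental aggregation described in this entry. This is not the actual Metabase implementation; `row-path`, `col-path`, and `measures` are hypothetical keys for a row already split into its pivot-row values, pivot-col values, and measure values.

```clojure
(defn- add-measures
  "Element-wise sum of running totals with this row's measure values,
  starting from zeros when no total exists yet."
  [totals measures]
  (mapv + (or totals (repeat (count measures) 0)) measures))

(defn add-row
  "Fold one raw row into the accumulating pivot structure. Only sorted sets of
  unique row/col combinations and running totals are retained -- never the raw
  rows themselves."
  [acc {:keys [row-path col-path measures]}]
  (-> acc
      (update :row-values (fnil conj (sorted-set)) row-path) ; unique pivot-row combos
      (update :col-values (fnil conj (sorted-set)) col-path) ; unique pivot-col combos
      (update-in [:cells [row-path col-path]] add-measures measures)
      (update-in [:row-totals row-path] add-measures measures)
      (update-in [:col-totals col-path] add-measures measures)
      (update :grand-total add-measures measures)))

;; Usage: the final output step is then just lookups/arrangement.
(reduce add-row {}
        [{:row-path ["CA"] :col-path [2024] :measures [10]}
         {:row-path ["CA"] :col-path [2025] :measures [5]}
         {:row-path ["OR"] :col-path [2024] :measures [7]}])
```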
- Aleksandr Lesnenko authored
* fix iframe dashcards crashing subscriptions
* add a test to ensure iframes are filtered out of subscriptions

Co-authored-by: Adam James <adam.vermeer2@gmail.com>
Co-authored-by: adam-james <21064735+adam-james-v@users.noreply.github.com>
- Ngoc Khuat authored
- Oct 14, 2024
- Ngoc Khuat authored
* [Notification] Migrate user invited email (#48215)
* [Notification] Migrate alert create email (#48292)
* [Notification] Migrate slack token error email (#48333)
- metamben authored
* Implement inactive field removal
- Cam Saul authored
* Collapse `metabase.shared.*` namespaces
* Fix Kondo warnings
* Does updating the stories-data keys fix the failing tests?
* Appease msgcat
* Appease msgcat
* Fix typo
* Make the build happy
* Appease fslint
- John Swanson authored
* Remove `MB_API_KEY` env var
  A bit awkwardly, we never set `:deprecated` on the setting before. We can retroactively deprecate this as of v50. I'm keeping the setting purely to emit the warning message on startup.
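A sketch of what the retroactive deprecation might look like, assuming Metabase's `defsetting` macro and its `:deprecated` option; the docstring and option values here are illustrative, not the actual source:

```clojure
;; Illustrative sketch -- the setting is kept only so the deprecation warning
;; is emitted on startup.
(defsetting api-key
  "Legacy API key checked by the deprecated MB_API_KEY middleware."
  :type       :string
  :visibility :internal
  :deprecated "0.50") ; retroactively deprecated as of v50
```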
- John Swanson authored
* version and channel query params for version info (https://github.com/metabase/metabase/issues/48615)
* omit blanks from query params for version info
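A minimal sketch, with hypothetical names, of the "omit blanks" behavior for the version-info query params:

```clojure
(require '[clojure.string :as str])

(defn version-info-query-params
  "Build query params for the version-info request, dropping blank values."
  [{:keys [version channel]}]
  (into {}
        (remove (fn [[_k v]] (str/blank? v)))
        {:version version
         :channel channel}))

;; (version-info-query-params {:version "v0.50.1" :channel ""})
;; => {:version "v0.50.1"}
```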
- Cam Saul authored
* Experimental: try splitting MySQL test jobs into 4 partitions instead of 2
* user-http-request should make sure users are initialized
* Fix MySQL deadlocks in tests
* Bump init timeout to 90 seconds
* Fix metabase.api.session-test/logout-test
- Oct 11, 2024
- Ngoc Khuat authored
- Oct 10, 2024
- appleby authored
* Relax the arg types to ExpressionArg for concat expressions in the legacy schema
  Relax the arg types to ExpressionArg for concat, since many DBs allow concatenating non-string types. This also aligns with the corresponding MLv2 schema and with the reference docs we publish. Fixes #39439
* Add nested concat schema tests
* Add nested-concat query-processor tests
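Illustrative only: under the relaxed schema, a legacy MBQL `:concat` expression can mix string and non-string arguments (the field IDs below are made up), matching what many databases accept:

```clojure
;; A text field, a literal separator, another field, and an integer literal.
[:concat [:field 11 nil] " - " [:field 12 nil] 2024]
```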
- dpsutton authored
* Always startup prometheus metrics
  Previously the collectors only started up when a port was provided:

  ```
  MB_PROMETHEUS_SERVER_PORT=9191 java -jar metabase.jar
  ```

  But these counters are useful to include in anonymous stats. So let's start up the collectors, and then we can get their values like:

  ```clojure
  prometheus=> (dotimes [_ 500] (inc :metabase-email/messages))
  nil
  ;; prometheus/value is iapetos.core/value here
  prometheus=> (-> system :registry :metabase-email/messages prometheus/value)
  501.0
  ```

* rename `metabase.analytics.prometheus/inc` to `inc!`
  It side-effects a value and now no longer shadows `clojure.core/inc`, so we're all happy.
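After the rename, call sites use the bang variant; a usage sketch (the require alias is an assumption):

```clojure
(require '[metabase.analytics.prometheus :as prometheus])

;; Side-effecting counter increment; no longer shadows clojure.core/inc.
(prometheus/inc! :metabase-email/messages)
```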
- Ngoc Khuat authored
* [Notification] Notification and subscription (#47707)
* [Notification] Handlers + recipients (#47759)
* [Notification] Channel template table and model (#47782)
* [Notification] Render system event emails (#47859)
* [Notification] Strict type for channel template and notification recipient (#47910)
* [Notification] Event hydration (#47953)
* [Notification] Send asynchronously (#48200)
- Oct 09, 2024
- Oct 08, 2024
- dpsutton authored
This issue (https://github.com/metabase/metabase/issues/41919#issuecomment-2400542908) goes away in 5.3.1 of Apache POI. Seems like they have a yearly release cadence, so we can ensure this exists for the time being and then remove this when we can bump to 5.3.1.

```diff
index b763a1ffdf..c87992c935 100644
--- a/deps.edn
+++ b/deps.edn
@@ -122,7 +122,7 @@
  {:mvn/version "2.23.1"} ; allows the slf4j2 API to work with log4j 2
  org.apache.logging.log4j/log4j-layout-template-json {:mvn/version "2.23.1"} ; allows the custom json logging format
- org.apache.poi/poi {:mvn/version "5.2.5"} ; Work with Office documents (e.g. Excel spreadsheets) -- newer version than one specified by Docjure
+ org.apache.poi/poi {:mvn/version "5.3.1"} ; Work with Office documents (e.g. Excel spreadsheets) -- newer version than one specified by Docjure
  org.apache.poi/poi-ooxml {:mvn/version "5.2.5" :exclusions [org.bouncycastle/bcpkix-jdk15on org.bouncycastle/bcprov-jdk15on]}
```
- Cam Saul authored
* SQUASH!
* Add another sanity check
* Another test fix attempt
* Appease Kondo
- Chris Truter authored
Co-authored-by: Nick Fitzpatrick <nickfitz.582@gmail.com>
- Alexander Polyankin authored
* Fix query_metadata not including native queries
* Fix query_metadata not including native queries
* fix tests
- Alexander Polyankin authored
- Nicolò Pretto authored
Co-authored-by: Oisin Coveney <oisin@metabase.com>
Co-authored-by: Mahatthana (Kelvin) Nomsawadi <me@bboykelvin.dev>
Co-authored-by: bryan <bryan.maass@gmail.com>
Co-authored-by: Nicolò Pretto <info@npretto.com>
- Cam Saul authored
* Parallel driver tests PoC
* Set fail-fast to false for now
* Try splitting up non-driver tests to see how broken tests are
* Whoops fix plain BE tests
* Ok nvm I'll test this in another branch
* Fix fail-fast
* Experiment with the improved Hawk split logic
* Fix some broken/flaky tests
* Experiment: try splitting MySQL 8 tests into FOUR jobs
* Divide other Postgres and MySQL tests up and use num-partitions = 2
* Another test fix
* Flaky test fix
* Try making more stuff fast
* Make athena fast??
* Fix a few more things
* Test fixes?
* Fix configs
* Fix Mongo job syntax
* Fix busted test from #46942
* Fix Mongo config again
* wait-for-port needs to specify shell I guess
* More cleanup
* await-port can't have a timeout-minutes I guess
* Let's only parallelize MySQL for now.
* Cleanup action
* Cleanup wait-for-port action
* Fix another flaky test
* NOW driver tests will be FAST
* Need to mark driver tests too
* Fix wrong tag
* Use Hawk 1.0.5
* Fix busted metabase.public-settings-test/landing-page-setting-test
* Fix busted `metabase.api.database-test/get-database-test` etc. (hopefully)
* Fix busted `metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test` for Oracle
* Maybe this fixed `metabase.query-processor.middleware.permissions-test/e2e-ignore-user-supplied-perms-test`, maybe not
* Fix busted metabase.api.dashboard-test/dependent-metadata-test because endpoint had different sort order than test
* Ok my test fix did not work
* Fix metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test for Redshift
* Better test name
* More test fixes
* Schema fix
* PR feedback
* Split off test partitioning into separate PR
* Fix failing Oracle tests
* Another round of test fixes, hopefully
* Fix failing Redshift tests
* Maybe the last round of test fixes
* Fix Oracle
* Fix stray line
- Oct 07, 2024
- metamben authored
Implement better partitioning and sorting in window functions
- Case Nelson authored
* feat: move auth providers behind ee token
  Fixes #48235. Introduces new premium feature `database-auth-providers`. Moves fetch-auth behind defenterprise - oss will always return an empty map. Adds metabase.util.http to test outbound http requests.
* Fix broken refs
* Drop defmethod as adhoc overrides aren't desirable outside ee
* Drop unnecessary require
* Fix token and tests
* Fix tests
* Fix formatting
* Fix var cast exception
* Fix connection test
* Move test to ee namespace
* Move more tests behind enterprise
* Fix checked-section hiding
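A sketch of the OSS-side stub described above, assuming the usual shape of Metabase's `defenterprise` macro; the EE namespace and argument names here are illustrative, not the actual source:

```clojure
;; OSS implementation: always an empty map. The EE implementation lives in the
;; named enterprise namespace and is gated on the `database-auth-providers`
;; premium feature.
(defenterprise fetch-auth
  "Fetch database auth-provider credentials for a database."
  metabase-enterprise.database.auth-provider ; hypothetical EE namespace
  [_provider _details]
  {})
```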
- Oct 04, 2024
- Ryan Laurie authored
* add update channels in product
* support for changing release notes to show beta and nightly info
* dont export setting
* obey the linter and add tests
* export setting
* update e2e tests
* clojure magic
* clojure-foo
* better localization
* sorry mr linter
* add more tests
- Ngoc Khuat authored
- Oct 03, 2024
- lbrdnk authored
* Use context for field id computation while hydrating dashboard
* Update docstring
* Fix target
* Update lookup
* Fix param-target usage
* Unskip e2e
* Bind *param-id-context* in public dashboard computation
* Fix context update
* Fix field-ids->param-field-values-ignoring-current-user
* Add tests
* Comments
* cljfmt
* Update field-ids->param-field-values-ignoring-current-user
* Update src/metabase/models/params.clj
  Co-authored-by: Braden Shepherdson <braden@metabase.com>
* Update src/metabase/models/params.clj
  Co-authored-by: Braden Shepherdson <braden@metabase.com>
* Use atom instead of volatile!
* Avoid dynamic function var
* Update src/metabase/models/params.clj
  Co-authored-by: Braden Shepherdson <braden@metabase.com>
* Address remarks

Co-authored-by: Braden Shepherdson <braden@metabase.com>
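A minimal sketch of the pattern named in this entry -- a dynamic var bound to an atom that caches field-id computation for the duration of one dashboard hydration. The helper names and cache shape are assumptions, not Metabase's actual code:

```clojure
(def ^:dynamic *param-id-context*
  "When bound, an atom holding a cache of dashboard-id -> computed field IDs."
  nil)

(defn- compute-field-ids
  "Stand-in for the real (expensive) field-id computation."
  [_dashboard]
  #{})

(defn param-field-ids
  "Return field IDs for `dashboard`, consulting the context cache when bound."
  [dashboard]
  (if *param-id-context*
    (or (get @*param-id-context* (:id dashboard))
        (let [ids (compute-field-ids dashboard)]
          (swap! *param-id-context* assoc (:id dashboard) ids)
          ids))
    (compute-field-ids dashboard)))

;; Usage: bind once per hydration so repeated lookups share one cache.
;; (binding [*param-id-context* (atom {})]
;;   (param-field-ids dashboard))
```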
- Chris Truter authored
- Oct 02, 2024
- appleby authored
* Add BE test for exporting self-joined renamed columns
* Add e2e test for exporting self-joined renamed columns
  See also:
  - Not renamed fields in a same table join inherit the renamed name in exports (#48046)
  - backport: Support both name & field ref-based column keys in viz settings on read and upgrade on write (#48243)
- Oct 01, 2024
- Chris Truter authored
- Chris Truter authored
- Chris Truter authored
- Sep 30, 2024
- John Swanson authored
* Do not cache all token check failures

  We want to cache token checks to avoid an issue where we repeatedly ask the store "hey, is this token valid?? is this token valid?? is this token valid??" for the same token.

  However, transient errors can also occur. For example, maybe a network issue causes the HTTP request to fail entirely. In this case, if we cache the result, the user needs to restart metabase (or wait 5 minutes until the cache is cleared) before they can attempt to validate their token again.

  This PR moves the cache logic deeper into the stack. We want to cache "successful" responses from the store API - cases where the store has told us categorically that the token is or is not valid. We don't want or need to cache other things that might happen. Maybe your token isn't the right length - we can recalculate that, it's ok. Maybe you get a 503 error from the Store - we should let you retry. Maybe your network is having issues and you can't contact the Store at all - again, we should let you retry.

  The one potential issue I see here is that if the store goes down, we'll massively increase the number of requests we send to the store, potentially making it harder to recover. If this is a concern, I can add a circuit breaker: if we repeatedly get errors back from the store, back off and stop making requests for a while.

* Add a circuit breaker to store API requests

  In the pathological case where the store goes down for >5 minutes, the cache will expire and all instances everywhere will start repeatedly making requests for token validation at once. This might make recovering from an outage more difficult.

  This adds a circuit breaker around the API request. If the call repeatedly throws (5XX errors, socket timeouts, etc.) then we'll pause for 1 minute, during which time all calls to token validation will immediately fail without making any request to the API. After one minute, we'll allow one request through to the API. If it succeeds, we'll go back to normal operation. Otherwise, we'll wait another minute.
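A simplified sketch of the circuit-breaker behavior described above; the threshold, state shape, and names are assumptions, and Metabase's actual implementation may differ:

```clojure
(def ^:private failure-threshold 5)
(def ^:private open-duration-ms (* 60 1000)) ; stay open for 1 minute

(defonce ^:private breaker (atom {:failures 0 :opened-at nil}))

(defn call-with-circuit-breaker
  "Invoke `f`, tracking failures. While the breaker is open, fail immediately
  without calling `f`; after the open window expires, let one trial call through."
  [f]
  (let [{:keys [opened-at]} @breaker
        now (System/currentTimeMillis)]
    (when (and opened-at (< (- now opened-at) open-duration-ms))
      (throw (ex-info "Circuit breaker open; skipping store request" {})))
    (try
      (let [result (f)]
        (reset! breaker {:failures 0 :opened-at nil}) ; success: close the breaker
        result)
      (catch Exception e
        (swap! breaker (fn [{:keys [failures]}]
                         (let [failures (inc failures)]
                           {:failures  failures
                            ;; open (or re-open) once failures hit the threshold
                            :opened-at (when (>= failures failure-threshold) now)})))
        (throw e)))))
```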
- Sep 27, 2024
- Noah Moss authored
- Braden Shepherdson authored
This was old logic to support certain drivers (e.g. pre-JDBC Druid) and isn't required for most. It's perfectly sound to filter or even break out on a datetime column without bucketing. Adds a new `:temporal/requires-default-unit` driver feature, and enables it only for the legacy Druid driver. Fixes #47341
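A sketch of how such a feature is typically switched on for a single driver, assuming Metabase's `driver/database-supports?` multimethod (requires omitted; illustrative, not the actual commit):

```clojure
;; Only the legacy Druid driver requires a default temporal bucketing unit.
(defmethod driver/database-supports? [:druid :temporal/requires-default-unit]
  [_driver _feature _database]
  true)
```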