This project is mirrored from https://github.com/metabase/metabase. Pull mirroring updated.
  1. Oct 18, 2024
  2. Oct 17, 2024
  3. Oct 16, 2024
  4. Oct 15, 2024
    • Adds missing schemas + data for metrics to snowplow stats ping (#48476) · e5c181bd
      bryan authored
      * adds data + schema for metrics stats ping
      
      * remove comment
      
      * annotate todos
      
      * fill in the rest of the metrics values
      
      - and add defaults
      
      * fix some definitions + use a single timestamp
      
      * shuffle stuff around to appease the namespace linter
      
      - we aren't reaching into the api API, so the linter is more of a
        formality here.
      
      * update docstring
      
      * fix jsonschema, maybe
      
      * review responses
      
      - revert changes to 1-0-0
      - add metrics section to new file: 1-0-1
      - bump ::instance_stats to "1-0-1"
      - add tags into 1-0-1
      - make the code return tags (empty for now until we know what to tag things.)
      
      - also fix test that broke from shuffling settings around
      
      * remove a footgun
      
      * add tags to metrics,
      
      add grouped_metrics to jsonschema 1-0-1
      
      format 1-0-1
      
      * indent
      
      * cljfmt
      
      * version should match filename
      
      * update instance stats 1-0-1 schema
      
      * require `tags` in grouped_metric + snowcat comment
      
      * fix formatting noise
      
      * update schema to make it valid
      
      * remove grouped_metrics from instance_stats schema
      
      * cljfmt appeasement
      
      * unbin cache_num_queries_cached value
      
      - alphabetize metrics generation
      
      * we can now guarantee metric values will be ints
      
      * jsonschema for instance_uuid, settings, and grouped_metrics
      
      * add analytics_uuid and make it required
      
      * lint + alphabetize instance stats json schema
      
      * update setting key type
      
      - add maxLength to some strings
      
      * lint jsonschema
      
      - Add description to setting.items.tags
      - add maxLength, and {min,max}imum to setting.items.value
      
      * Bump instance stats to 2-0-0
      
      - remove analytics_uuid string length == 36 check
      - adds assertion to ensure required fields are set (and it passes)
      - adds info for embedding settings
      
      * adds a grouped-metric to stats ping
      
      - adds length info to the schema to pass jsonschema linting
      
      * cljfmt
    • Incremental Pivot Processing for Exports (#46995) · 8d52a03f
      adam-james authored
      
      * Incremental Pivot Processing for Exports
      
      WIP
      
      Fixes pivot exports for CSV and xlsx.
      
      The CSV export should use less memory by incrementally building up the data structure and aggregating necessary row
      data right away, so the memory overhead becomes only as large as the total pivot result.
      
      In cases where the pivot rows/cols do combine into many many columns and rows, this can still be a large set of data,
      but it should behave much better now in most cases.
      
      The Excel export is a little more straightforward: create the export rows in the same fashion, streaming one row at a
      time, and just post-process the sheet to add the pivot table in one shot at the end.
      
      * WIP adding row totals.
      
      * aggregate totals as rows are added
      
      Row, column, section, and grand totals are all aggregated as each row is added.
      This means the final step of building pivot output becomes just an exercise of lookups/arrangement, no further
      aggregation is needed.
      
      * CSV pivot works per-row, export respects formatting
      
      This is a big step forward; we don't need to hold the entire dataset in memory. Instead we aggregate each row's data into
      the pivot data structure, which only holds onto:
      
      - unique values for each pivot-row in a sorted set
      - unique values for each pivot-col in a sorted set
      - grand total for each measure N values, where N is number of measures, usually 1 or 2
      - row totals for each combination of each pivot-row * N measures
      - col totals for each combination of each pivot-col * N measures
      - totals for each 'section', determined by unique values of first pivot-row * N measures
      - values for each measure in every 'cell'; Row Combos * Col Combos * N Measures
      
      So, there can still be a decent amount of data to store; but it will never hold onto all of the 'raw rows' from the
      dataset.
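
The per-row aggregation described above can be sketched as follows. This is an illustrative Python stand-in, not Metabase's actual Clojure implementation; `PivotAccumulator` and the single-measure row layout are hypothetical names for this sketch:

```python
from collections import defaultdict

class PivotAccumulator:
    """Aggregates one row at a time; keeps only unique axis values and totals,
    never the raw rows themselves."""
    def __init__(self):
        self.row_values = set()               # unique pivot-row values
        self.col_values = set()               # unique pivot-col values
        self.cells = defaultdict(float)       # (row, col) -> measure total
        self.row_totals = defaultdict(float)  # row value -> total
        self.col_totals = defaultdict(float)  # col value -> total
        self.grand_total = 0.0

    def add_row(self, row_val, col_val, measure):
        # Every total is updated as the row arrives, so building the final
        # pivot output is just lookups/arrangement, with no further aggregation.
        self.row_values.add(row_val)
        self.col_values.add(col_val)
        self.cells[(row_val, col_val)] += measure
        self.row_totals[row_val] += measure
        self.col_totals[col_val] += measure
        self.grand_total += measure

acc = PivotAccumulator()
for r, c, m in [("A", "x", 1), ("A", "y", 2), ("B", "x", 3)]:
    acc.add_row(r, c, m)
```

Memory stays proportional to the pivot result (axis values, cells, and totals) rather than to the number of input rows.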
      
      We can never completely guarantee that Row Combos * Col Combos * N Measures remains small, but two things let us move
      forward anyway:
      
      - there's now visible feedback in the app that the download is running (or if it's failed)
      - Pivot table utility diminishes rapidly with huge output anyway; users still need to curate/set up their data
        effectively to improve the table's utility, so we can assume that a slow-to-download pivot table is also slow to
        use/less effective, and will likely be something the user doesn't want (as often).
      
      * some test fixes
      
      * now, if we export 'raw pivot rows', they don't show pivot-grouping
      
      and they also don't include the 'extra' rows for totals/subtotals/grand totals (any row with pivot-grouping > 0).
      
      This means that now the non-pivot version of a pivot table export will match what a user sees if they change the viz
      to a regular table.
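
The filtering described above can be sketched like this (hypothetical Python; the column layout and `raw_pivot_rows` name are illustrative, not the actual export code):

```python
# Rows from a pivot query carry a pivot-grouping column; group 0 holds the
# plain data rows, while groups > 0 hold subtotal/grand-total rows.
rows = [
    ["Widget", "West", 0, 100],  # pivot-grouping = 0: real data row
    ["Widget", None,  1, 100],   # subtotal row
    [None,     None,  3, 100],   # grand-total row
]
GROUPING_IDX = 2

def raw_pivot_rows(rows, grouping_idx=GROUPING_IDX):
    """Keep only the data rows and drop the pivot-grouping column itself,
    matching what a user sees when switching the viz to a regular table."""
    return [
        [v for i, v in enumerate(row) if i != grouping_idx]
        for row in rows
        if row[grouping_idx] == 0
    ]
```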
      
      * remove old test
      
      * re-incorporate some changes from master
      
      * fix csv for non-pivots due to oversight in my changes
      
      This is just a temporary change, I think I should clean up this bit of the code a little, I can probably make it a
      little more readable and use some cleaner logic regarding if the rows are 'raw pivot rows' or not.
      
      * start moving format_rows to POST body, add pivot_results too
      
      There's still wiring work to do, but this starts to add format_rows and pivot_results to POST body for the various API
      endpoints. Also modify tests to improve coverage/consistency across downloads and alerts/subscriptions.
      
      The tests will not pass on this commit, but fixes will be incoming
      
      * native pivot tables in xlsx
      
      * add precondition to pass migration linter
      
      * try to get migrations fixed
      
      * passing pivot-results through api and attachments
      
      * fix tests for format_rows in BODY vs query param
      
      * tests!
      
      * might have the tests all fixed now
      
      * the pivoted export now respects col/row totals settings
      
      * add test coverage for public questions and dashboards
      
      * col and row totals work as expected
      
      * build-pivot refactor for clarity
      
      * docstring change + tiny refactor in helper fn
      
      * see if dashcard download works with format_rows
      
      * csv pivot handles nil values
      
      * pass format_rows and pivot_results in :params not :body
      
      * fix some other tests
      
      * pivot-grouping col filtered out of xlsx
      
      * pivot-grouping-col removed for all rows
      
      * configurable pivot exports and attachments (#47880)
      
      * exports fe
      
      * specs
      
      * ui
      
      * specs
      
      * format/unformatted now works for xlsx
      
      * format test changes for xlsx formatting
      
      * embedding endpoints accept pivot_results
      
      * cljfmt and eslint fix
      
      * empty
      
      * embedding test should have formatting defaulted to true
      
      * embed test fixes
      
      * Use `Chip` for export settings widget
      
      * downloads e2e test fix
      
      * fix public download limit test
      
      * public card download defaults
      
      * fix public download defaults in some tests
      
      * Fix visual test
      
      ---------
      
      Co-authored-by: Aleksandr Lesnenko <alxnddr@users.noreply.github.com>
      Co-authored-by: Noah Moss <32746338+noahmoss@users.noreply.github.com>
      Co-authored-by: Anton Kulyk <kuliks.anton@gmail.com>
    • fix iframe dashcards crash subscriptions (#48589) · ba06b99e
      Aleksandr Lesnenko authored
      
      * fix iframe dashcards crash subscriptions
      
      * add a test to ensure iframes are filtered out of subscriptions
      
      ---------
      
      Co-authored-by: Adam James <adam.vermeer2@gmail.com>
      Co-authored-by: adam-james <21064735+adam-james-v@users.noreply.github.com>
    • Ngoc Khuat authored · 1a2a9c81
  5. Oct 14, 2024
  6. Oct 11, 2024
  7. Oct 10, 2024
    • Relax concat args to allow any ExpressionArg (#48506) · c16d05a0
      appleby authored
      * Relax the arg types to ExpressionArg for concat expressions in the legacy schema
      
      Relax the arg types to ExpressionArg for concat since many DBs allow concatenating non-string types. This also aligns
      with the corresponding MLv2 schema and with the reference docs we publish.
      
      Fixes #39439
      
      * Add nested concat schema tests
      
      * Add nested-concat query-processor tests
    • Always startup prometheus metrics (#48547) · 2410012a
      dpsutton authored
      * Always startup prometheus metrics
      
      previously only started up when a port was provided
      
      ```
      MB_PROMETHEUS_SERVER_PORT=9191 java -jar metabase.jar
      ```
      
      But these counters are useful to include in anonymous stats. So
      let's start up the collectors and then we can get their values like:
      
      ```clojure
      prometheus=> (dotimes [_ 500] (inc :metabase-email/messages))
      nil
      ;; prometheus/value is iapetos.core/value here
      prometheus=> (-> system :registry :metabase-email/messages prometheus/value)
      501.0
      ```
      
      * rename `metabase.analytics.prometheus/inc` to `inc!`
      
      it side-effects a value and no longer shadows `clojure.core/inc`, so
      we're all happy
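
The pattern this commit describes (always-registered counters whose values can be read back for a stats ping, even with no metrics server running) can be sketched with a plain Python stand-in for the iapetos/Prometheus registry; `Registry` and its methods are hypothetical names for this sketch:

```python
from collections import defaultdict

class Registry:
    """Minimal stand-in for a Prometheus-style registry: counters are
    registered at startup regardless of whether a metrics HTTP server was
    configured, and their current values can be read back for stats."""
    def __init__(self):
        self._counters = defaultdict(float)

    def inc(self, name, amount=1.0):
        # Named `inc` on purpose, mirroring the rename to `inc!` in the commit:
        # a side-effecting increment, distinct from clojure.core/inc.
        self._counters[name] += amount

    def value(self, name):
        return self._counters[name]

registry = Registry()
for _ in range(500):
    registry.inc("metabase-email/messages")
```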
    • [Notification] System event notification (#47857) · fc43d3cd
      Ngoc Khuat authored
      * [Notification] Notification and subscription (#47707)
      
      * [Notification] Handlers + recipients (#47759)
      
      * [Notification] Channel template table and model (#47782)
      
      * [Notification] Render system event emails (#47859)
      
      * [Notification] Strict type for channel template and notification recipient (#47910)
      
      * [Notification] Event hydration (#47953)
      
      * [Notification] Send asynchronously (#48200)
  8. Oct 09, 2024
  9. Oct 08, 2024
    • Ensure the temp directory exists (#48488) · 5fd75e0d
      dpsutton authored
      https://github.com/metabase/metabase/issues/41919#issuecomment-2400542908
      
      This issue goes away in 5.3.1 of Apache POI. They seem to have a
      yearly release cadence, so we can ensure the directory exists for the time being
      and remove this workaround when we can bump to 5.3.1
      
      ```diff
      index b763a1ffdf..c87992c935 100644
      --- a/deps.edn
      +++ b/deps.edn
      @@ -122,7 +122,7 @@
         {:mvn/version "2.23.1"}             ; allows the slf4j2 API to work with log4j 2
         org.apache.logging.log4j/log4j-layout-template-json
         {:mvn/version "2.23.1"}             ; allows the custom json logging format
      -  org.apache.poi/poi                        {:mvn/version "5.2.5"}              ; Work with Office documents (e.g. Excel spreadsheets) -- newer version than one specified by Docjure
      +  org.apache.poi/poi                        {:mvn/version "5.3.1"}              ; Work with Office documents (e.g. Excel spreadsheets) -- newer version than one specified by Docjure
         org.apache.poi/poi-ooxml                  {:mvn/version "5.2.5"
                                                    :exclusions  [org.bouncycastle/bcpkix-jdk15on
                                                                  org.bouncycastle/bcprov-jdk15on]}
      ```
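
The workaround itself, ensuring the temp directory exists before a library (Apache POI in the original issue) writes scratch files to it, can be sketched as a Python stand-in; `ensure_temp_dir` is a hypothetical name for this sketch:

```python
import os
import tempfile

def ensure_temp_dir(path=None):
    """Recreate the temp directory if something deleted it out from under us,
    so later scratch-file writes don't fail with a missing-directory error."""
    path = path or tempfile.gettempdir()
    os.makedirs(path, exist_ok=True)  # no-op when it already exists
    return path

d = ensure_temp_dir()
```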
    • :race_car::rocket: :race_car::rocket: :race_car::rocket: Test partitioning for MySQL: shave ~5 minutes off of CI runs :race_car::rocket::race_car::rocket: :race_car::rocket: (#48422) · b8172829
      Cam Saul authored
      * SQUASH!
      
      * Add another sanity check
      
      * Another test fix attempt
      
      * Appease Kondo
    • Chris Truter
    • Fix query_metadata not including metadata for native queries (#48459) · 3a0a9690
      Alexander Polyankin authored
      * Fix query_metadata not including native queries
      
      * Fix query_metadata not including native queries
      
      * fix tests
    • Native query drill (#48232) · 4aa67506
      Alexander Polyankin authored
    • Nicolò Pretto authored · 537ba417
    • :race_car::rocket::race_car::rocket: :race_car::rocket: SHAVE 7 MINUTES OFF OF NON-CORE DRIVER TEST RUNS IN CI :race_car::rocket::race_car::rocket: :race_car::rocket: (#47681) · cd4d7646
      Cam Saul authored
      * Parallel driver tests PoC
      
      * Set fail-fast to false for now
      
      * Try splitting up non-driver tests to see how broken tests are
      
      * Whoops fix plain BE tests
      
      * Ok nvm I'll test this in another branch
      
      * Fix fail-fast
      
      * Experiment with the improved Hawk split logic
      
      * Fix some broken/flaky tests
      
      * Experiment: try splitting MySQL 8 tests into FOUR jobs
      
      * Divide other Postgres and MySQL tests up and use num-partitions = 2
      
      * Another test fix :wrench:
      
      * Flaky test fix :wrench:
      
      * Try making more stuff fast
      
      * Make athena fast??
      
      * Fix a few more things
      
      * Test fixes? :wrench:
      
      * Fix configs
      
      * Fix Mongo job syntax
      
      * Fix busted test from #46942
      
      * Fix Mongo config again
      
      * wait-for-port needs to specify shell I guess
      
      * More cleanup
      
      * await-port can't have a timeout-minutes I guess
      
      * Let's only parallelize MySQL for now.
      
      * Cleanup action
      
      * Cleanup wait-for-port action
      
      * Fix another flaky test
      
      * NOW driver tests will be FAST
      
      * Need to mark driver tests too
      
      * Fix wrong tag
      
      * Use Hawk 1.0.5
      
      * Fix busted metabase.public-settings-test/landing-page-setting-test
      
      * Fix busted `metabase.api.database-test/get-database-test` etc. (hopefully)
      
      * Fix busted `metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test` for Oracle
      
      * Maybe this fixed `metabase.query-processor.middleware.permissions-test/e2e-ignore-user-supplied-perms-test` maybe not
      
      * Fix busted metabase.api.dashboard-test/dependent-metadata-test because endpoint had different sort order than test
      
      * Ok my test fix did not work
      
      * Fix metabase.sync.sync-metadata.fields-test/sync-fks-and-fields-test for Redshift
      
      * Better test name
      
      * More test fixes :wrench:
      
      * Schema fix
      
      * PR feedback
      
      * Split off test partitioning into separate PR
      
      * Fix failing Oracle tests
      
      * Another round of test fixes, hopefully
      
      * Fix failing Redshift tests
      
      * Maybe the last round of test fixes
      
      * Fix Oracle
      
      * Fix stray line
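
The core idea behind this PR, deterministically splitting test namespaces across CI partitions, can be sketched as follows. This is a hypothetical illustration; Hawk's actual split logic may assign work differently:

```python
import hashlib

def partition_index(test_ns, num_partitions):
    """Deterministically assign a test namespace to one of N CI partitions.
    A stable hash (not Python's randomized hash()) keeps the split identical
    across CI runs and machines, so each job always gets the same tests."""
    digest = hashlib.sha256(test_ns.encode()).hexdigest()
    return int(digest, 16) % num_partitions

namespaces = [
    "metabase.api.dashboard-test",
    "metabase.query-processor-test",
    "metabase.sync.fields-test",
    "metabase.driver.mysql-test",
]
partitions = {i: [ns for ns in namespaces if partition_index(ns, 2) == i]
              for i in range(2)}
```

Each partition then runs in its own CI job, which is what lets, e.g., the MySQL 8 suite be split into multiple parallel jobs.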
  10. Oct 07, 2024
    • Implement better partitioning and sorting in window functions (#48028) · de50ba71
      metamben authored
      Implement better partitioning and sorting in window functions
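
What PARTITION BY + ORDER BY compute in a window function can be sketched in plain Python (an illustrative stand-in, not Metabase's SQL generation; `running_sums` is a hypothetical name):

```python
from itertools import groupby

def running_sums(rows):
    """rows: (partition_key, order_key, value) tuples. Returns each row
    extended with a running sum that restarts at every partition and
    accumulates in order-key order, i.e. the equivalent of
    SUM(value) OVER (PARTITION BY p ORDER BY o)."""
    rows = sorted(rows, key=lambda r: (r[0], r[1]))
    result = []
    for _, group in groupby(rows, key=lambda r: r[0]):
        acc = 0
        for p, o, v in group:
            acc += v
            result.append((p, o, v, acc))
    return result

data = [("a", 2, 10), ("a", 1, 5), ("b", 1, 7)]
```

Getting the partitioning and sort keys right is exactly what determines whether the accumulator restarts and accumulates in the intended order.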
    • feat: move auth providers behind ee token (#48245) · eaabfa79
      Case Nelson authored
      * feat: move auth providers behind ee token
      
      Fixes #48235
      
      Introduces new premium feature `database-auth-providers`.
      Moves fetch-auth behind defenterprise - oss will always return an empty
      map.
      Add metabase.util.http to test outbound http requests.
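
The defenterprise pattern described above, where the EE implementation runs only when the license token grants the feature and OSS always gets a fallback, can be sketched in Python; every name here (`PREMIUM_FEATURES`, `fetch_auth_ee`, the provider string) is hypothetical:

```python
# Feature flags granted by the license token, populated at startup.
PREMIUM_FEATURES = set()

def fetch_auth_ee(provider):
    # Stand-in for the EE implementation's credential lookup.
    return {"username": "from-" + provider}

def fetch_auth(provider):
    """EE implementation only when the token grants the feature;
    OSS / unlicensed instances always get an empty map, as in the commit."""
    if "database-auth-providers" in PREMIUM_FEATURES:
        return fetch_auth_ee(provider)
    return {}

assert fetch_auth("some-provider") == {}   # no feature: OSS fallback
PREMIUM_FEATURES.add("database-auth-providers")
```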
      
      * Fix broken refs
      
      * Drop defmethod as adhoc overrides aren't desirable outside ee
      
      * Drop unnecessary require
      
      * Fix token and tests
      
      * Fix tests
      
      * Fix formatting
      
      * Fix var cast exception
      
      * Fix connection test
      
      * Move test to ee namespace
      
      * Move more tests behind enterprise
      
      * Fix checked-section hiding
  11. Oct 04, 2024