This project is mirrored from https://github.com/metabase/metabase.
  1. Oct 21, 2021
  2. Oct 20, 2021
  3. Oct 19, 2021
    • Pawit Pornkitprasan's avatar
      fix wrong drill down query when using nested query (#17942) · ff9b99d0
      Pawit Pornkitprasan authored
      * qp: fetch_source_query: store card-id for each query
      
      we want to be able to determine further in the pipeline
      whether a query came from a card (i.e. saved question)
      or not
      
      * qp: annotate: remove join-alias if source is a card
      
      If the source is a card, the front-end should be able to treat it
      similarly to a database view, so we should not expose the join
      aliases outside.
      
      If the card is on the right side of the join, though, the alias
      should still exist and refer to the current-level join alias (a
      rough sketch of the alias stripping follows this entry).
      Unverified
      ff9b99d0
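      A minimal Clojure sketch of the alias-stripping idea from ff9b99d0, assuming column
      metadata carries a `:field_ref` like `[:field 42 {:join-alias "Orders"}]` and that a
      query gains a `:source-card-id` when it came from a saved question; the function name
      and data shapes are illustrative, not the actual annotate middleware.
      ```clojure
      ;; Illustrative sketch, not the real qp annotate code.
      (defn- strip-join-alias-if-card
        "When the source query came from a card, drop :join-alias from a column's
        field ref so the card looks like a plain database view to the frontend."
        [{:keys [source-card-id]} column]
        (if source-card-id
          (update column :field_ref
                  (fn [[tag id-or-name opts]]
                    [tag id-or-name (not-empty (dissoc opts :join-alias))]))
          column))

      (comment
        (strip-join-alias-if-card
         {:source-card-id 123}
         {:name "TOTAL", :field_ref [:field 42 {:join-alias "Orders"}]})
        ;; => {:name "TOTAL", :field_ref [:field 42 nil]}
        )
      ```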
    • Jeff Evans's avatar
      Support encryption of binary data (#17646) · 2e891f1d
      Jeff Evans authored
      Update the encryption code to operate on byte arrays directly (without going through a base64 String representation), delegating the existing methods to these new ones (see the sketch after this entry)
      
      Adding test
      Unverified
      2e891f1d
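      A rough sketch of that delegation, assuming a new byte-array entry point (the
      `encrypt-bytes` below is a hypothetical placeholder, not the actual
      metabase.util.encryption function) that the existing String-based method wraps with
      UTF-8 and base64 conversion.
      ```clojure
      (ns example.encryption
        (:import (java.nio.charset StandardCharsets)
                 (java.util Base64)))

      ;; Hypothetical byte-array entry point; the real cipher call is elided and the
      ;; plaintext is returned unchanged purely to keep this sketch self-contained.
      (defn encrypt-bytes ^bytes [^bytes secret-key ^bytes plaintext]
        plaintext)

      ;; The pre-existing String API becomes a thin wrapper: String -> bytes, encrypt,
      ;; then base64-encode the ciphertext for storage.
      (defn encrypt ^String [^bytes secret-key ^String s]
        (.encodeToString (Base64/getEncoder)
                         (encrypt-bytes secret-key (.getBytes s StandardCharsets/UTF_8))))
      ```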
    • Jeff Evans's avatar
      Prevent duplicate connection properties (#18359) · 22d23a98
      Jeff Evans authored
      Removing duplicate property declaration from presto-jdbc driver YAML
      
      Add a test that executes against all drivers to confirm that no duplicate names come out of connection-properties (sketched after this entry)
      
      Change the way the test runs to avoid needing to initialize the test data namespace (to keep googleanalytics happy)
      
      Unskip repro Cypress test
      Unverified
      22d23a98
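      The duplicate check from 22d23a98 could look something like this sketch, assuming each
      driver's connection properties come back as maps with a `:name` key; the helper name is
      made up.
      ```clojure
      (defn duplicate-connection-property-names
        "Return the set of property names that appear more than once."
        [connection-properties]
        (->> connection-properties
             (map :name)
             frequencies
             (keep (fn [[property-name n]] (when (> n 1) property-name)))
             set))

      (comment
        ;; the assertion the per-driver test makes is that this set is empty
        (duplicate-connection-property-names
         [{:name "host"} {:name "port"} {:name "ssl"} {:name "ssl"}])
        ;; => #{"ssl"}
        )
      ```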
    • Dennis Schridde's avatar
      Fix precondition of change set 97 (#16095) · 2d88ae48
      Dennis Schridde authored
      * Fix precondition of change set 97
      
      Without the `type` field and with the space, Liquibase is unable to parse
      this precondition.
      
      During `lein test` it outputs:
      ```
      [clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Running Changeset:migrations/000_migrations.yaml::97::senior
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Changeset migrations/000_migrations.yaml::97::senior
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Added 0.32.0
      [clojure-agent-send-off-pool-0] INFO  liquibase.changelog - Marking ChangeSet: migrations/000_migrations.yaml::97::senior ran despite precondition failure due to onFail='MARK_RAN':
                liquibase.yaml : DBMS Precondition failed: expected null, got h2
      
      [clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Skipping ChangeSet: migrations/000_migrations.yaml::97::senior
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Executing with the 'jdbc' executor
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - 1 row(s) affected
      ```
      
      After this change the output changes to:
      ```
      [clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Running Changeset:migrations/000_migrations.yaml::97::senior
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Changeset migrations/000_migrations.yaml::97::senior
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Added 0.32.0
      [clojure-agent-send-off-pool-0] INFO  liquibase.changelog - Marking ChangeSet: migrations/000_migrations.yaml::97::senior ran despite precondition failure due to onFail='MARK_RAN':
                liquibase.yaml : DBMS Precondition failed: expected mysql,mariadb, got h2
      
      [clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Skipping ChangeSet: migrations/000_migrations.yaml::97::senior
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Executing with the 'jdbc' executor
      [clojure-agent-send-off-pool-0] DEBUG liquibase.executor - 1 row(s) affected
      ```
      
      For documentation of the syntax cf.
       https://docs.liquibase.com/concepts/advanced/preconditions.html
      
      
      
      * Extend migration linter to check dbms preconditions
      
      * Also validate the `type` field of the `dbms` precondition (see the spec sketch after this entry)
      
      Co-authored-by: default avatardpsutton <dan@dpsutton.com>
      Unverified
      2d88ae48
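      A hypothetical clojure.spec sketch of the kind of check the linter extension in 2d88ae48
      describes; the spec names and keys are illustrative, but they capture the rule that a
      `dbms` precondition needs a non-blank `type` whose comma-separated list contains no
      spaces.
      ```clojure
      (require '[clojure.spec.alpha :as s]
               '[clojure.string :as str])

      ;; "mysql,mariadb" is fine; "mysql, mariadb" (space) or a missing type is not.
      (s/def ::type (s/and string?
                           (complement str/blank?)
                           #(not (str/includes? % " "))))

      (s/def ::dbms (s/keys :req-un [::type]))

      (comment
        (s/valid? ::dbms {:type "mysql,mariadb"})  ;; => true
        (s/valid? ::dbms {:type "mysql, mariadb"}) ;; => false
        (s/valid? ::dbms {})                       ;; => false
        )
      ```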
    • Noah Moss's avatar
      Normalize field refs in viz settings, rework column ordering approach, and... · 423236a1
      Noah Moss authored
      Normalize field refs in viz settings, rework column ordering approach, and expand test coverage (#18490)
      
      Unverified
      423236a1
    • Howon Lee's avatar
      Coalesce audit question query runs to be 0 if no query executions (#18559) · 1feb52e7
      Howon Lee authored
      Previously, null query runs showed up at the top of the sort because they are a null set. This can happen because you can create cards in the notebook builder without ever running them. Coalesce the null to 0 so things sort right (a sketch of the coalesce follows this entry).
      Unverified
      1feb52e7
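      An illustrative HoneySQL-style fragment of the idea in 1feb52e7 (table and column names
      are guesses, not the actual EE audit query): the per-card run count comes from a left
      join, so it is NULL for never-executed cards unless it is coalesced to 0.
      ```clojure
      {:select    [:c.id
                   :c.name
                   [[:coalesce :runs.query_runs 0] :query_runs]]
       :from      [[:report_card :c]]
       :left-join [[{:select   [:card_id [[:count :*] :query_runs]]
                     :from     [:query_execution]
                     :group-by [:card_id]}
                    :runs]
                   [:= :runs.card_id :c.id]]
       :order-by  [[:query_runs :desc]]}
      ```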
    • Alexander Lesnenko's avatar
      Fix no databases on embedded new question page (#18556) · 8ad622da
      Alexander Lesnenko authored
      * Fix no databases on embedded new question page
      
      * Add an explanation just in case, replace lifecycle method
      Unverified
      8ad622da
    • Eric Dallo's avatar
    • Ariya Hidayat's avatar
      Circle CI: build a complete Uberjar on a release branch (#18550) · c8b22c46
      Ariya Hidayat authored
      Make sure that the built Uberjar contains translations etc.
      Also, build with "large" resource_class on Circle CI.
      Unverified
      c8b22c46
    • Howon Lee's avatar
      Audit cards data display problem (#18527) · 4bd2c5b1
      Howon Lee authored
      The underlying problem was that the cardinality of dates was getting limited by the default 1000 limit imposed by EE queries. Whack it for this specific instance. If we ever see it again, we should just remove the 1000 limit instead.
      Unverified
      4bd2c5b1
    • Dalton's avatar
      add more parameters unit tests (#18308) · bef09e29
      Dalton authored
      * add more parameters unit tests
      
      * add a few meta/Dashboard tests
      
      * remove some old, unnecessary comments
      
      * fix assertions in tests
      
      * fix getParameterTargetField tests
      
      * delete nonsensical test
      Unverified
      bef09e29
  4. Oct 18, 2021
    • Pawit Pornkitprasan's avatar
      Fix X-Ray table field shown as "null" in the title (#18066) · 2c69dc73
      Pawit Pornkitprasan authored
      Fields were not normalized before being processed,
      resulting in the result being `null` (an illustrative sketch follows this entry).
      
      Fixes #15737
      Unverified
      2c69dc73
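      Illustrative only: the gist of the fix in 2c69dc73 is normalizing JSON-derived field
      refs (vectors of strings) into the keyword-tagged form the rest of the code expects; the
      function below is a made-up stand-in, not Metabase's normalization code.
      ```clojure
      (defn normalize-field-ref
        "Turn a JSON-ish field ref like [\"field\" 15 nil] into [:field 15 nil]."
        [[tag id-or-name opts]]
        [(keyword tag) id-or-name opts])

      (comment
        (normalize-field-ref ["field" 15 nil]) ;; => [:field 15 nil]
        )
      ```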
    • Pawit Pornkitprasan's avatar
      Allow caching of fonts and images (#18239) · 3db89e2a
      Pawit Pornkitprasan authored
      Webpack generates multiple resources with names of the
      form "/[md4-hash].ext". We should allow those to be
      cached (a middleware sketch follows this entry).
      Unverified
      3db89e2a
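      A rough Ring-middleware sketch of the caching idea in 3db89e2a; the regex, extensions,
      and max-age are illustrative assumptions, not the actual Metabase handler.
      ```clojure
      (def ^:private hashed-asset?
        "Content-hashed webpack output: a new content hash means a new URL, so the
        old URL can be cached indefinitely."
        #(boolean (re-find #"/[0-9a-f]{16,}\.(js|css|woff2?|ttf|png|svg)$" %)))

      (defn wrap-static-asset-caching
        "Ring middleware adding a long-lived Cache-Control header for hashed assets."
        [handler]
        (fn [request]
          (let [response (handler request)]
            (if (hashed-asset? (:uri request))
              (assoc-in response [:headers "Cache-Control"]
                        "public, max-age=31536000, immutable")
              response))))
      ```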
    • Ariya Hidayat's avatar
      Revert "Circle CI: build a complete Uberjar on a release branch (#18370)" (#18544) · 7ea4b567
      Ariya Hidayat authored
      This reverts commit adb2f715 as it broke Uberjar builds on CircleCI.
      Unverified
      7ea4b567
    • Cam Saul's avatar
    • Ariya Hidayat's avatar
      Circle CI: build a complete Uberjar on a release branch (#18370) · adb2f715
      Ariya Hidayat authored
      Make sure that the built Uberjar contains translations etc.
      Unverified
      adb2f715
    • Howon Lee's avatar
      Tools for fixing errors problems with postgres semantics of limits (blank... · 9e388032
      Howon Lee authored
      Tools for fixing errors problems with postgres semantics of limits (blank display of error table) (#18432)
      
      Previously the error table sometimes blanked out in Postgres because of Postgres's limit semantics. Get it to not do that by whacking the limit and displaying the latest error another way.
      Unverified
      9e388032
    • Howon Lee's avatar
      Mongo custexp fixes: group by and filters (#18403) · 5c819b89
      Howon Lee authored
      Previously Mongo custom expressions were just columns: you couldn't have group-bys with them and you couldn't have filters with them. This change allows both; they weren't really tested before. Also enabled Nemanja's tests for them (an example query shape follows this entry).
      Unverified
      5c819b89
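      For reference, an MBQL-shaped example of what 5c819b89 enables on Mongo: a custom
      expression used both as a breakout (group by) and inside a filter. The database, table,
      and field ids are made up.
      ```clojure
      {:database 1
       :type     :query
       :query    {:source-table 2
                  :expressions  {"discounted" [:* [:field 10 nil] 0.9]}
                  :aggregation  [[:count]]
                  :breakout     [[:expression "discounted"]]
                  :filter       [:> [:expression "discounted"] 100]}}
      ```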
    • Noah Moss's avatar
    • Dalton's avatar
      Use Dimensions to access Fields in Parameters code (#18431) · a48733a0
      Dalton authored
      
      * fix virtual field access when connecting parameters to dimensions
      
      * fix tests
      
      * lint fix
      
      * maybe keep things a little more backwards compatible
      
      * fix cy test
      
      * stop messing with field array ids
      
      * look for virtual fields on nested questions + tests
      
      * fix dashboard mapping test
      
      * fix cy test
      
      * add parameterToMBQLFilter tests
      
      * remove direct FIELD_REF usage
      
      * revert a few changes
      
      * Update frontend/src/metabase-lib/lib/Dimension.js
      
      Co-authored-by: default avatarGustavo Saiani <gustavo@poe.ma>
      
      Co-authored-by: default avatarGustavo Saiani <gustavo@poe.ma>
      Unverified
      a48733a0
    • Ariya Hidayat's avatar
      Custom expression editor: use lexical-based highlighting (#18482) · 11be3959
      Ariya Hidayat authored
      This reduces memory pressure since the spans (for the
      contentEditable) will be flat (before, they constructed a tree).
      Unverified
      11be3959
    • dpsutton's avatar
      Ensure we are paginating resultsets (#18477) · a33fa568
      dpsutton authored
      * Ensure we are paginating resultsets
      
      Made big tables in both pg and mysql
      
      pg:
      ```sql
      create table large_table
      (
          id         serial primary key,
          large_text text
      );
      
      insert into large_table (large_text)
      select repeat('Z', 4000)
      from generate_series(1, 500000)
      ```
      
      In mysql use the repl:
      ```clojure
      
        (jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
                       ["CREATE TABLE large_table (id int NOT NULL PRIMARY KEY AUTO_INCREMENT, foo text);"])
      
        (do
          (jdbc/insert-multi! (sql-jdbc.conn/db->pooled-connection-spec 5)
                              :large_table
                              (repeat 50000 {:foo (apply str (repeat 5000 "Z"))}))
          :done)
      
        (jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
                       ["ALTER TABLE large_table add column properties json default null"])
      
        (jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
                       ["update large_table set properties = '{\"data\":{\"cols\":null,\"native_form\":{\"query\":\"SELECT
                       `large_table`.`id` AS `id`, `large_table`.`foo` AS `foo` FROM `large_table` LIMIT
                       1\",\"params\":null},\"results_timezone\":\"UTC\",\"results_metadata\":{\"checksum\":\"0MnSKb8145UERWn18F5Uiw==\",\"columns\":[{\"semantic_type\":\"type/PK\",\"coercion_strategy\":null,\"name\":\"id\",\"field_ref\":[\"field\",200,null],\"effective_type\":\"type/Integer\",\"id\":200,\"display_name\":\"ID\",\"fingerprint\":null,\"base_type\":\"type/Integer\"},{\"semantic_type\":null,\"coercion_strategy\":null,\"name\":\"foo\",\"field_ref\":[\"field\",201,null],\"effective_type\":\"type/Text\",\"id\":201,\"display_name\":\"Foo\",\"fingerprint\":{\"global\":{\"distinct-count\":1,\"nil%\":0.0},\"type\":{\"type/Text\":{\"percent-json\":0.0,\"percent-url\":0.0,\"percent-email\":0.0,\"percent-state\":0.0,\"average-length\":500.0}}},\"base_type\":\"type/Text\"}]},\"insights\":null,\"count\":1}}'"])
      
      ```
      
      and then from the terminal client repeat this until we have 800,000 rows:
      ```sql
      insert into large_table (foo, properties) select foo, properties from large_table;
      ```
      
      Then can exercise from code with the following:
      
      ```clojure
      (-> (qp/process-query {:database 5 ; use appropriate db and tables here
                              :query {:source-table 42
                                      ;; :limit 1000000
                                      },
                              :type :query}
                              ;; don't retain any rows, purely just counting
                              ;; so resultset is what retains too many rows
                             {:rff (fn [metadata]
                                     (let [c (volatile! 0)]
                                       (fn count-rff
                                         ([]
                                          {:data metadata})
                                         ([result]
                                          (assoc-in result [:data :count] @c))
                                         ([result _row]
                                          (vswap! c inc)
                                          result))))
                              })
           :data :count)
      ```
      
      PG was far easier to blow up. Mysql took quite a bit of data.
      
      Then we just set a fetch size on the result set so that we (hopefully)
      only have that many rows in memory in the resultset at once. The
      streaming will write to the download stream as it goes (a sketch
      follows at the end of this entry).
      
      PG has one other complication in that the fetch size can only be honored
      if autoCommit is false. The reasoning seems to be that each statement runs
      in a transaction and commits, and to commit it has to close result sets;
      therefore it has to realize the entire resultset, otherwise you would
      only get the initial page, if any.
      
      * Set default fetch size to 500
      
      ;; Long queries on gcloud pg
      ;; limit 10,000
      ;; fetch size | t1   | t2   | t3
      ;; -------------------------------
      ;; 100        | 6030 | 8804 | 5986
      ;; 500        | 1537 | 1535 | 1494
      ;; 1000       | 1714 | 1802 | 1611
      ;; 3000       | 1644 | 1595 | 2044
      
      ;; limit 30,000
      ;; fetch size | t1    | t2    | t3
      ;; -------------------------------
      ;; 100        | 17341 | 15991 | 16061
      ;; 500        | 4112  | 4182  | 4851
      ;; 1000       | 5075  | 4546  | 4284
      ;; 3000       | 5405  | 5055  | 4745
      
      * Only set fetch size if not default (0)
      
      Connection details with `:additional-options "defaultRowFetchSize=3000"` can set a
      default fetch size, and we can easily honor that. This allows overriding
      it per database without much work on our part.
      
      * Remove redshift custom fetch size code
      
      This removes the automatic insertion of a defaultRowFetchSize=5000 on
      redshift dbs. Now we always set this to 500 in the sql-jdbc statement
      and prepared statement fields. And we also allow custom ones to persist
      over our default of 500.
      
      One additional benefit of removing this is that the old code always included the
      option even if a user added ?defaultRowFetchSize=300 themselves, so this
      should actually give more control to our users.
      
      Profiling quickly on selecting 79,000 rows from Redshift, there was
      essentially no difference between a fetch size of 500 (the default) and
      5000 (the old Redshift default); both were around 12442 ms.
      
      * Remove unused require of settings in redshift tests
      
      * Appease the linter
      
      * Unnecessary redshift connection details tests
      Unverified
      a33fa568
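      A minimal JDBC sketch of the two behaviors described in a33fa568 (illustrative, not the
      actual sql-jdbc execution code): only override the fetch size when it is still the
      driver default of 0 so a user-supplied defaultRowFetchSize wins, and disable auto-commit
      so Postgres will actually page the result set.
      ```clojure
      (import '(java.sql Connection PreparedStatement))

      (defn set-default-fetch-size!
        "Only apply our default when the statement still has the driver default (0),
        so a user-supplied defaultRowFetchSize from :additional-options wins."
        [^PreparedStatement stmt default-fetch-size]
        (when (zero? (.getFetchSize stmt))
          (.setFetchSize stmt default-fetch-size)))

      (defn count-rows-paginated
        "Stream the result set page by page instead of realizing it all at once."
        [^Connection conn ^String sql]
        (.setAutoCommit conn false) ; PG only honors the fetch size when auto-commit is off
        (with-open [stmt (doto (.prepareStatement conn sql)
                           (set-default-fetch-size! 500))
                    rs   (.executeQuery stmt)]
          (loop [n 0]
            (if (.next rs)
              (recur (inc n))
              n))))
      ```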
    • Ariya Hidayat's avatar
      Unskip the repro for 14776 (#18525) · 76a35846
      Ariya Hidayat authored
      The issue has been fixed in the past with PR 15839.
      Unverified
      76a35846
    • Alexander Lesnenko's avatar
      Fix missing card descriptions (#18501) · b1a74fff
      Alexander Lesnenko authored
      * fix missing description info icon on dashboard cards
      
      * update test
      Unverified
      b1a74fff
    • Alexander Polyankin's avatar
    • Anton Kulyk's avatar
      Friendly revision history messages (#17858) · 1018c5fc
      Anton Kulyk authored
      * Move revision helpers to own directory
      
      * Add simple utility to format revision messages
      
      * Add messages for basic dashboard cards changes
      
      * Handle null values for dashboard card actions
      
      * Add basic card series revision message support
      
      * Batch multiple changes in a single revision
      
      * Add `isValidRevision` helper
      
      * Return title and description instead of a single string
      
      * Filter out unknown fields in revisions
      
      * Fix viz settings revision descriptions
      
      * Add helpers to revisions unit tests
      
      * Use new revision messages util
      
      * Filter out invalid or unknown revisions
      
      * Capitalize revision descriptions
      
      * Wrap new item name with double-quotes
      
      * Move revisions unit tests to source code directory
      
      * Add basic HistoryModal tests
      
      * Add getChangedFields helper
      
      * Revert getRevisionMessage return type back to str
      
      * Extend isValidRevision check
      
      * Fix getRevisionEventsForTimeline work with updated helper
      
      * Expose revision utils
      
      * Use new messages in HistoryModal
      
      * Remove getRevisionDescription function
      
      * Handle cases when revision's after / before state is null
      
      * Simplify getRevisionMessage
      
      * Use "description" instead of "message"
      
      * Fix dataset_query revision not parsed correctly
      
      * Filter out unknown field change types
      
      * Support collection_id change event
      
      * Return array of changes instead of batching in a single message
      
      * Return JSX from getRevisionEventsForTimeline
      
      * Fix UI
      
      * Remove console.log
      
      * Use "rearranged the cards" message
      
      * Fix e2e test using old revision messages
      
      * Prefer 'after' state to get changed fields
      
      * Fix timeline revision event
      
      * Fix translations
      
      * Add `key` prop to `jt`
      
      * Merge revision files
      
      * Add an option to not lowercase the capitalized str
      
      * Use updated capitalize function
      
      * Fix test string
      
      * Display question's "display" change messages
      
      * [ci nocache]
      
      * Fix tests
      
      * [ci nocache]
      Unverified
      1018c5fc
    • Anton Kulyk's avatar
      Add custom react-testing-library render function (#18353) · 01f442a8
      Anton Kulyk authored
      * Add custom @testing-library/react render wrapper
      
      * Migrate unit tests to custom render function
      
      * Rename helper to `renderWithProviders`
      
      * Remove irrelevant eslint rule disable
      Unverified
      01f442a8
  5. Oct 17, 2021