This project is mirrored from https://github.com/metabase/metabase.
  1. Oct 19, 2021
  2. Oct 18, 2021
    • Fix X-Ray table field shown as "null" in the title (#18066) · 2c69dc73
      Pawit Pornkitprasan authored
      The field was not normalized before being processed,
      resulting in `null` being shown in the title.
      
      Fixes #15737
    • Allow caching of fonts and images (#18239) · 3db89e2a
      Pawit Pornkitprasan authored
      Webpack generates multiple resources with names of the form
      "/[md4-hash].ext". We should allow those to be
      cached.
    • Ensure we are paginating resultsets (#18477) · a33fa568
      dpsutton authored
      * Ensure we are paginating resultsets
      
      Made big tables in both pg and mysql
      
      pg:
      ```sql
      create table large_table
      (
          id         serial primary key,
          large_text text
      );
      
      insert into large_table (large_text)
      select repeat('Z', 4000)
      from generate_series(1, 500000)
      ```
      
      In mysql use the repl:
      ```clojure
      
        (jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
                       ["CREATE TABLE large_table (id int NOT NULL PRIMARY KEY AUTO_INCREMENT, foo text);"])
      
        (do
          (jdbc/insert-multi! (sql-jdbc.conn/db->pooled-connection-spec 5)
                              :large_table
                              (repeat 50000 {:foo (apply str (repeat 5000 "Z"))}))
          :done)
      
        (jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
                       ["ALTER TABLE large_table add column properties json default null"])
      
        (jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
                       ["update large_table set properties = '{\"data\":{\"cols\":null,\"native_form\":{\"query\":\"SELECT
                       `large_table`.`id` AS `id`, `large_table`.`foo` AS `foo` FROM `large_table` LIMIT
                       1\",\"params\":null},\"results_timezone\":\"UTC\",\"results_metadata\":{\"checksum\":\"0MnSKb8145UERWn18F5Uiw==\",\"columns\":[{\"semantic_type\":\"type/PK\",\"coercion_strategy\":null,\"name\":\"id\",\"field_ref\":[\"field\",200,null],\"effective_type\":\"type/Integer\",\"id\":200,\"display_name\":\"ID\",\"fingerprint\":null,\"base_type\":\"type/Integer\"},{\"semantic_type\":null,\"coercion_strategy\":null,\"name\":\"foo\",\"field_ref\":[\"field\",201,null],\"effective_type\":\"type/Text\",\"id\":201,\"display_name\":\"Foo\",\"fingerprint\":{\"global\":{\"distinct-count\":1,\"nil%\":0.0},\"type\":{\"type/Text\":{\"percent-json\":0.0,\"percent-url\":0.0,\"percent-email\":0.0,\"percent-state\":0.0,\"average-length\":500.0}}},\"base_type\":\"type/Text\"}]},\"insights\":null,\"count\":1}}'"])
      
      ```
      
      and then from the terminal client repeat this until we have 800,000 rows:
      ```sql
      insert into large_table (foo, properties) select foo, properties from large_table;
      ```
      
      Then can exercise from code with the following:
      
      ```clojure
      (-> (qp/process-query {:database 5 ; use appropriate db and tables here
                              :query {:source-table 42
                                      ;; :limit 1000000
                                      },
                              :type :query}
                              ;; don't retain any rows, purely just counting
                              ;; so resultset is what retains too many rows
                             {:rff (fn [metadata]
                                     (let [c (volatile! 0)]
                                       (fn count-rff
                                         ([]
                                          {:data metadata})
                                         ([result]
                                          (assoc-in result [:data :count] @c))
                                         ([result _row]
                                          (vswap! c inc)
                                          result))))
                              })
           :data :count)
      ```
      
      PG was far easier to blow up; MySQL took quite a bit of data.
      
      Then we just set a fetch size on the result set so that we (hopefully)
      only have that many rows in memory in the result set at once. The
      streaming will write to the download stream as it goes.
      
      PG has one other complication in that the fetch size is only honored
      if autoCommit is false. The reasoning seems to be that each statement
      runs in its own transaction and commits; committing has to close its
      resultsets, and therefore the driver has to realize the entire resultset
      up front, otherwise you would only ever get the initial page, if any.
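
      As an illustration of that constraint, here is a minimal sketch using plain
      JDBC interop (not the Metabase query-processor code; `datasource` is an
      assumed javax.sql.DataSource pointing at the large table created above):
      ```clojure
      ;; On PG the fetch size only takes effect when autoCommit is off; with it
      ;; on, the driver realizes the whole resultset before returning.
      (with-open [conn (.getConnection datasource)]
        (.setAutoCommit conn false)               ; required for cursor-based fetching on PG
        (with-open [stmt (.createStatement conn)]
          (.setFetchSize stmt 500)                ; keep ~500 rows in memory at a time
          (with-open [rs (.executeQuery stmt "select id from large_table")]
            (loop [n 0]                           ; stream through the rows, counting them
              (if (.next rs)
                (recur (inc n))
                n)))))
      ```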
      
      * Set default fetch size to 500
      
      ;; Long queries on gcloud pg
      ;; limit 10,000
      ;; fetch size | t1   | t2   | t3
      ;; -------------------------------
      ;; 100        | 6030 | 8804 | 5986
      ;; 500        | 1537 | 1535 | 1494
      ;; 1000       | 1714 | 1802 | 1611
      ;; 3000       | 1644 | 1595 | 2044
      
      ;; limit 30,000
      ;; fetch size | t1    | t2    | t3
      ;; -------------------------------
      ;; 100        | 17341 | 15991 | 16061
      ;; 500        | 4112  | 4182  | 4851
      ;; 1000       | 5075  | 4546  | 4284
      ;; 3000       | 5405  | 5055  | 4745
      
      * Only set fetch size if not default (0)
      
      Setting `:additional-options "defaultRowFetchSize=3000"` in the connection
      details can set a default fetch size, and we can easily honor that. This
      allows overriding it per db without much work on our part.
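
      A hypothetical sketch of that check (names are illustrative, not the actual
      sql-jdbc code): a fetch size supplied via the JDBC URL already shows up on
      the statement, so we only apply our default when the statement still
      reports the driver default of 0.
      ```clojure
      (defn maybe-set-fetch-size!
        "Respect an explicit defaultRowFetchSize from :additional-options and
        only fall back to 500 when nothing was configured."
        [^java.sql.Statement stmt]
        (when (zero? (.getFetchSize stmt))   ; 0 = driver default, nothing configured
          (.setFetchSize stmt 500)))
      ```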
      
      * Remove redshift custom fetch size code
      
      This removes the automatic insertion of defaultRowFetchSize=5000 on
      redshift dbs. Now we always set the fetch size to 500 in the sql-jdbc
      statement and prepared statement code, and we allow custom values to
      persist over our default of 500.
      
      One additional benefit of removing this is that the old code always
      included the option even if a user added ?defaultRowFetchSize=300
      themselves, so this change should actually give more control to our users.
      
      Profiling quickly on selecting 79,000 rows from redshift, there was
      essentially no difference between a fetch size of 500 (the default) and
      5000 (the old redshift default); both took about 12442 ms.
      
      * Remove unused require of settings in redshift tests
      
      * Appease the linter
      
      * Remove unnecessary redshift connection details tests
  3. Oct 15, 2021
    • Fix flaky `rotate-encryption-key!-test` (#18371) · 82b3147c
      Jeff Evans authored
      Fix flaky `rotate-encryption-key!-test`
      
      Bind a temporary site locale when running the test, and disable the settings cache
      
      Now, a full explanation of what was making the test flaky, for posterity:
      
      When this test runs, it first tries to create a blank app DB.  This is either postgres, h2, or mysql, depending on the driver under test, which is slightly confusing, but anyway...
      
      In order to populate all the app DB tables (since the "real" app DB has already been initialized at this point to even run the test), it first rebinds the appropriate dynamic vars for the db type, spec, connection, etc. and then loads a test fixture H2 file into the app DB.  This h2 test fixture file, evidently, contains all the current app DB tables, etc.
      
      When initializing the app DB from H2 (in `metabase.cmd.copy/copy!`), the code needs to internationalize certain loading message strings.  And in order to internationalize messages, it has to look up the current site locale, which is itself an application setting, which of course comes from the setting entity table.
      
      For the "regular" blank app DB scenario (i.e. when Metabase first starts with a blank app DB), there is some code in the `metabase.util.i18n.impl` namespace to handle this chicken and egg problem (basically, defaulting to "en" if it can't be loaded from the app DB because there is no app DB yet).  But after this tricky process finishes, that temporary, hacked lookup is replaced by the real one (since now, the app DB exists and the locale can just be loaded normally).
      
      For the purpose of our test, that is the problem.  That swap (which replaces the `site-locale-from-setting` fn with the real lookup) has already run (remember, we have already initialized the app DB to even run this test in the first place, and now we are swapping in a temporary one).  So the call to load the site locale from the settings table (which is needed to print the loading message to initialize this temp app DB) fails, and hence the test fails.
      
      Now, the reason this test was flaky, instead of just always failing, is because of the settings cache.  If the site locale had been loaded anywhere within some short time frame before this test needs to load it, to print init messages, then it succeeds!  But that doesn't always end up happening, of course, since it's effectively a race condition (setting cache expiration versus test execution timing).
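
      A heavily hedged sketch of the shape of the fix (not the literal test code;
      `mt/with-temporary-setting-values` is Metabase's test helper for overriding
      a setting, and the body is a placeholder):
      ```clojure
      ;; Pin the site locale so copy! can internationalize its loading messages
      ;; without racing the settings cache; the cache itself is disabled for the
      ;; test as well (the exact cache-disabling helper is not shown here).
      (mt/with-temporary-setting-values [site-locale "en"]
        ;; ... create the blank app DB and exercise rotate-encryption-key! ...
        )
      ```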
  4. Oct 14, 2021
  5. Oct 13, 2021
  6. Oct 11, 2021
    • Two column table (#18392) · 57506cc0
      dpsutton authored
      * Don't render bar charts as sparklines
      
      https://github.com/metabase/metabase/issues/18352
      
      A simple select `SELECT 'a', 1 UNION ALL SELECT 'b', 2` would trigger as
      a sparkline and blow up when comparing. There are two issues:
      - don't render tables as a sparkline even if it only has two columns
      - don't compare things that aren't comparable.
      
      This solves the first issue, ensuring that tables aren't matched as
      sparklines.
      
      * Sparkline should only compare comparable values
      
      Check that the columns are temporal or numeric before comparing them. The
      comparison is only used to optionally reverse the order of the results,
      which seems questionable in itself, but we can at least check that the
      values are comparable before attempting to do so.
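
      Purely illustrative, a minimal predicate in the spirit of that guard (the
      real code consults Metabase's column type metadata; here we just check the
      values themselves):
      ```clojure
      (defn sparkline-comparable?
        "True when every x value is numeric or temporal, so reversing the results
        by comparing first and last x values is safe."
        [x-values]
        (every? #(or (number? %) (inst? %)) x-values))

      (sparkline-comparable? [1 2 3])     ;; => true
      (sparkline-comparable? ["a" "b"])   ;; => false, don't try to compare these
      ```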
      
      * Ensure tests render as sparklines
      
      The default was a :table, and that adds csv attachments which throw off
      our tests.
      
      * unskip repro
      
      * remove unnecessary extra line
    • Correctly format 0 decimal places (#18397) · 6dd76f4a
      dpsutton authored
      Input:    12345.65432
      Expected: 12346
      Actual:   12346.
      
      We were unconditionally adding "." (repeat decimals "0") to the base
      formatting string, so when decimals is 0 we end up appending a bare
      "." to the base format string.
  7. Oct 08, 2021
  8. Oct 06, 2021
  9. Oct 05, 2021
  10. Oct 04, 2021
  11. Oct 01, 2021
  12. Sep 30, 2021
    • Fix errors from disallowed characters in BigQuery custom expression names (#18055) · cb05523b
      Jeff Evans authored
      Add `escape-alias` multimethod to `sql.qp` to handle translating an alias (whether for a field or expression) into something supported by the driver in question
      
      Add default impl that is essentially identity
      
      Marking `sql.qp/field->alias` as deprecated, and changing its default implementation to invoke the new `sql.qp/escape-alias` on the field's `:name` (with the intention of removing it in a future release, since individual drivers won't need to override this so much as they'll need to override `escape-alias` itself).
      
      Override `sql.qp/escape-alias` method for both BigQuery drivers to run through the same `->valid-field-identifier` function already defined for this purpose
      
      Add test for a custom expression with weird characters
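
      A simplified sketch of the multimethod shape described above (the dispatch
      and escaping details are approximations, not the exact sql.qp code):
      ```clojure
      (require '[clojure.string :as str])

      (defmulti escape-alias
        "Escape a field/expression alias into something the driver accepts."
        (fn [driver _alias] driver))

      (defmethod escape-alias :default     ; default impl is essentially identity
        [_driver alias-name]
        alias-name)

      (defmethod escape-alias :bigquery    ; BigQuery: replace disallowed characters
        [_driver alias-name]
        (str/replace alias-name #"[^\w]" "_"))

      (escape-alias :bigquery "total (%)") ;; => "total____"
      ```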
    • fix filters alignment (#18139) · c56df9ce
      Alexander Lesnenko authored
    • Do not emit view logs when there's no card id for query viewlogs (fixes #18136) (#18138) · 9eced495
      Howon Lee authored
      There is a problem when a userland query has no card id (not all userland queries are non-adhoc; they can be ad hoc): the view log needs a card id, so it chokes. This just adds a check so we don't even try to emit a view log, since the auditing is for non-adhoc queries only.
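
      Illustrative only, a minimal guard in the spirit of that check (the real
      code lives in Metabase's view-log event handling and differs in detail):
      ```clojure
      (defn record-view! [entry]           ; stand-in for the real view-log insert
        (println "view logged:" entry))

      (defn maybe-record-view!
        "Ad-hoc userland queries have no card id; skip the view log for those."
        [{:keys [card-id] :as query-info}]
        (when card-id
          (record-view! query-info)))
      ```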
  13. Sep 29, 2021
  14. Sep 28, 2021
    • Unverify cards when they are edited (#17997) · a750e4a5
      dpsutton authored
      from maz in slack:
      Should not remove verification:
       - Moving to another collection
       - Pinning/unpinning
       - Adding/removing an alert on the question
       - Turning sharing on the question on/off
      Should remove verification:
       - Editing the query
       - Editing the visualization or viz settings
       - Editing the title or description
      
      This really just identifies when not to remove verification; otherwise it
      does remove verification. Alerts are not related to questions through
      the api, so no worries there. Everything else is just excluding
      attributes from the diff calculation.
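
      A sketch of that "which edits remove verification" rule (the ignored
      attribute names below are assumptions for illustration, not the exact
      columns excluded from the real diff):
      ```clojure
      (def ^:private ignored-attributes
        "Changes to these should NOT remove verification (moves, pinning, sharing)."
        #{:collection_id :collection_position :enable_embedding :made_public_by_id})

      (defn removes-verification?
        "True when the card diff touches anything other than the ignored attributes."
        [old-card new-card]
        (let [ks      (into (set (keys old-card)) (keys new-card))
              changed (remove #(= (get old-card %) (get new-card %)) ks)]
          (boolean (seq (remove ignored-attributes changed)))))

      (removes-verification? {:name "Q" :collection_id 1} {:name "Q" :collection_id 2}) ;; => false
      (removes-verification? {:name "Q"} {:name "Q (edited)"})                          ;; => true
      ```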
      
      In tests, I went perhaps a bit haywire with the macrolet, which allows
      defining macros inline rather than farther away from the call site. It is
      used twice:
      - once to optionally add a moderation review into the with-temp. This
      macro is handy to ensure we always have a fresh state of card and don't
      have leftover changes or verifications. It has to be a macro since I want
      to dynamically change whether there are verifications in the with-temp*
      vector binding
      - as a way to more easily reduce repetition when asserting something is
      verified, doing an action, and asserting it is still verified
    • Audit cache controls 2 (#18046) · d7898af8
      Howon Lee authored
      Cache controls have all landed but are lacking the audit capability called for in the Notion doc. This PR adds that audit capability and, along the way, changes the ViewLog model so it can answer whether a given view was a cache hit or not.
  15. Sep 27, 2021
    • Fix {0} being shown when locale is "pt" (#17875) · 35651360
      Pawit Pornkitprasan authored
      This is a combination of 2 issues:
       1) pt got renamed to pt_BR in x.39.x but the old
          "pt" value may still be stored in the database
       2) when making a release, an unclean build
          directory may be used, containing old "pt"
          resources which get leaked into the release build
      
      1) is fixed by adding a function to treat "pt" as "pt_BR"
      to support users who were on "pt" since pre-x.39.
      (This is done by finding the closest fallback locale)
      
      2) is fixed by emptying the folder before generating locales
      so any old locales are deleted.
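
      A hedged sketch of the fallback idea from fix 1 (illustrative; the real
      lookup lives in Metabase's i18n code and is more general):
      ```clojure
      (require '[clojure.string :as str])

      (defn closest-available-locale
        "Return the stored locale if available, otherwise an available locale
        with the same language part, e.g. \"pt\" -> \"pt_BR\"."
        [stored available]
        (or (some #{stored} available)
            (let [lang (first (str/split stored #"_"))]
              (first (filter #(str/starts-with? % (str lang "_")) available)))))

      (closest-available-locale "pt" ["en" "pt_BR" "fr"]) ;; => "pt_BR"
      ```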
      
      Fixes #16690