This project is mirrored from https://github.com/metabase/metabase.
Oct 21, 2021
Gustavo Saiani authored
Nemanja Glumac authored
Fix #18547 - Wrong id applied to the link to the question added to the dashboard [Activity log] (#18578)

* Fix wrong id applied when constructing a link to the question added to the dashboard. Closes #18547
* Unskip repro
* Add unit tests for question ids
* Guard against the dashboard questions with no names
Gustavo Saiani authored
Oct 20, 2021
Cam Saul authored
dpsutton authored
Also move it down with the other LSP bits.
Jeff Evans authored
Fix calculation in `ttl-hierarchy`

The `ttl-hierarchy` function falls back to `query-magic-ttl` if no stored (card, dashboard, database) cache settings are available. But that function returns its TTL value in milliseconds, while the model-level, granular TTLs are stored in hours. Convert those to seconds before returning, so that `ttl-hierarchy` always returns seconds.

Add a backend test for this scenario (no stored model TTLs, but an average query execution time is available, and hence `query-magic-ttl` should be used). Unskip the Cypress repro.
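The bug class here is a plain unit mismatch: one branch of the hierarchy works in milliseconds, the other stores hours, and callers expect seconds. A minimal sketch of the normalization (Python for illustration; the function names are hypothetical, not Metabase's actual Clojure code):

```python
def hours_to_seconds(hours):
    # Granular (card/dashboard/database) TTLs are stored in hours.
    return hours * 3600

def millis_to_seconds(millis):
    # The fallback "magic" TTL is computed in milliseconds.
    return millis // 1000

def effective_ttl_seconds(granular_ttl_hours, magic_ttl_millis):
    """Normalize both sources to seconds before returning, so every
    caller of the hierarchy receives a single, consistent unit."""
    if granular_ttl_hours is not None:
        return hours_to_seconds(granular_ttl_hours)
    return millis_to_seconds(magic_ttl_millis)
```

For example, a stored TTL of 1 hour yields 3600 seconds, while a missing stored TTL falls through to the magic value converted from milliseconds.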
Nemanja Glumac authored
Alexander Polyankin authored
Nemanja Glumac authored
* Register new notebook e2e helper `visualize`
* Apply new `visualize` helper to several different specs as a PoC
* Apply `visualize` helper to all related specs
Gustavo Saiani authored
Ariya Hidayat authored
Also, when reduced motion is preferred, increase the spring stiffness of the motion to finish the transition faster (timing-wise, the actual animation won't appear due to the above snapping).
Pawit Pornkitprasan authored
Oct 19, 2021
Pawit Pornkitprasan authored
* qp: fetch_source_query: store card-id for each query

We want to be able to determine further in the pipeline whether a query came from a card (i.e. saved question) or not.

* qp: annotate: remove join-alias if source is a card

If the source is a card, the front-end should be able to treat it similarly to a database view, so we should not expose the join aliases outside. If the card is on the right side of the join, though, the alias should still exist and refer to the current-level join alias.
Jeff Evans authored
Update encryption code to be able to operate on byte arrays directly (without dealing in the base64 String representation), delegating the existing methods to point to these new ones. Add a test.
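The refactor pattern described above (byte-array primitives, with the base64/String API reduced to thin wrappers that delegate to them) can be sketched roughly as follows. Python for illustration only, with a toy XOR stand-in for the real cipher; none of these names are Metabase's actual functions:

```python
import base64

KEY = 0x42  # toy single-byte key; the real code uses a proper cipher and secret key

def encrypt_bytes(plaintext: bytes) -> bytes:
    """New primitive: operates on byte arrays directly, no base64 involved."""
    return bytes(b ^ KEY for b in plaintext)

def decrypt_bytes(ciphertext: bytes) -> bytes:
    # XOR with the same key is its own inverse.
    return bytes(b ^ KEY for b in ciphertext)

def encrypt(plaintext: str) -> str:
    """Existing String API, now delegating to the byte-level primitive
    and only handling the base64 representation itself."""
    return base64.b64encode(encrypt_bytes(plaintext.encode("utf-8"))).decode("ascii")

def decrypt(ciphertext: str) -> str:
    return decrypt_bytes(base64.b64decode(ciphertext)).decode("utf-8")
```

The point of the split is that callers holding raw bytes can skip the encode/decode round-trip entirely, while the String methods keep their old contract.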
Jeff Evans authored
Remove duplicate property declaration from the presto-jdbc driver YAML. Add a test that executes against all drivers to confirm that no duplicate names come out of `connection-properties`. Change the way the test runs to avoid needing to initialize the test data namespace (to make googleanalytics happy). Unskip the repro Cypress test.
Dennis Schridde authored
* Fix precondition of change set 97

Without the `type` and with the space, Liquibase is unable to parse this precondition. During `lein test` it outputs:

```
[clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Running Changeset:migrations/000_migrations.yaml::97::senior
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Changeset migrations/000_migrations.yaml::97::senior
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Added 0.32.0
[clojure-agent-send-off-pool-0] INFO liquibase.changelog - Marking ChangeSet: migrations/000_migrations.yaml::97::senior ran despite precondition failure due to onFail='MARK_RAN': liquibase.yaml : DBMS Precondition failed: expected null, got h2
[clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Skipping ChangeSet: migrations/000_migrations.yaml::97::senior
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Executing with the 'jdbc' executor
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - 1 row(s) affected
```

After this change the output changes to:

```
[clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Running Changeset:migrations/000_migrations.yaml::97::senior
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Changeset migrations/000_migrations.yaml::97::senior
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Added 0.32.0
[clojure-agent-send-off-pool-0] INFO liquibase.changelog - Marking ChangeSet: migrations/000_migrations.yaml::97::senior ran despite precondition failure due to onFail='MARK_RAN': liquibase.yaml : DBMS Precondition failed: expected mysql,mariadb, got h2
[clojure-agent-send-off-pool-0] DEBUG liquibase.changelog - Skipping ChangeSet: migrations/000_migrations.yaml::97::senior
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - Executing with the 'jdbc' executor
[clojure-agent-send-off-pool-0] DEBUG liquibase.executor - 1 row(s) affected
```

For documentation of the syntax cf. https://docs.liquibase.com/concepts/advanced/preconditions.html

* Extend migration linter to check dbms preconditions
* Also validate the `type` field of the `dbms` precondition

Co-authored-by: dpsutton <dan@dpsutton.com>
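Per the Liquibase preconditions documentation linked above, a YAML `dbms` precondition names its target databases under a `type` key. An illustrative changeset-level fragment (not the actual migration 97) would look roughly like:

```yaml
preConditions:
  - onFail: MARK_RAN
  - dbms:
      type: mysql,mariadb
```

With `onFail: MARK_RAN`, a changeset whose precondition fails (e.g. when running against H2) is marked as ran and skipped rather than aborting the migration.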
Noah Moss authored
Normalize field refs in viz settings, rework column ordering approach, and expand test coverage (#18490)
Howon Lee authored
Previously, null query runs showed up at the top because they are a null set. This can happen because you can create cards without ever running them in the notebook builder. Coalesce the null to 0 to make things sort correctly.
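The fix amounts to coalescing a missing run count to 0 before ordering. A minimal sketch (Python for illustration; the field names are hypothetical, not the actual schema):

```python
cards = [
    {"name": "never run", "query_runs": None},  # card created but never executed
    {"name": "popular", "query_runs": 42},
    {"name": "quiet", "query_runs": 3},
]

# Coalesce None to 0 so unexecuted cards sort below executed ones
# instead of floating to the top of the descending sort.
ranked = sorted(cards, key=lambda c: c["query_runs"] or 0, reverse=True)
```

In SQL terms this is the classic `ORDER BY COALESCE(query_runs, 0) DESC`.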
Alexander Lesnenko authored
* Fix no databases on embedded new question page
* Add an explanation just in case, replace lifecycle method
Eric Dallo authored
Ariya Hidayat authored
Make sure that the built Uberjar contains translations etc. Also, build with the "large" resource_class on CircleCI.
Howon Lee authored
The underlying problem was that the cardinality of dates was getting limited by the default 1000 limit imposed by EE queries. Whack it for this specific instance. I think if we ever see it again we just remove the 1000 limit instead.
Dalton authored
* add more parameters unit tests
* add a few meta/Dashboard tests
* remove some old, unnecessary comments
* fix assertions in tests
* fix getParameterTargetField tests
* delete nonsensical test
Oct 18, 2021
Pawit Pornkitprasan authored
Fields were not normalized before being processed, resulting in the result being `null`. Fixes #15737
Pawit Pornkitprasan authored
Webpack generates multiple resources with names of the form "/[md4-hash].ext". We should allow those to be cached.
Ariya Hidayat authored
This reverts commit adb2f715 as it broke Uberjar builds on CircleCI.
Cam Saul authored
Ariya Hidayat authored
Make sure that the built Uberjar contains translations etc.
Howon Lee authored
Tools for fixing error problems with Postgres semantics of limits (blank display of error table) (#18432). Previously the error table sometimes blanked out in Postgres because of its limit semantics. Get it to not do that by whacking the limit and doing the display of the latest error another way.
Howon Lee authored
Previously Mongo custom expressions were just columns: you couldn't have group-bys with them and you couldn't have filters with them. This allows those; they weren't really tested before. Also enabled Nemanja's tests for them.
Noah Moss authored
Dalton authored
* fix virtual field access when connecting parameters to dimensions
* fix tests
* lint fix
* maybe keep things a little more backwards compatible
* fix cy test
* stop messing with field array ids
* look for virtual fields on nested questions + tests
* fix dashboard mapping test
* fix cy test
* add parameterToMBQLFilter tests
* remove direct FIELD_REF usage
* revert a few changes
* Update frontend/src/metabase-lib/lib/Dimension.js

Co-authored-by: Gustavo Saiani <gustavo@poe.ma>
Ariya Hidayat authored
This reduces the memory pressure since the spans (for the contentEditable) will be flat (before, they constructed a tree).
dpsutton authored
* Ensure we are paginating resultsets

Made big tables in both pg and mysql.

pg:

```sql
create table large_table (
  id serial primary key,
  large_text text
);

insert into large_table (large_text)
select repeat('Z', 4000) from generate_series(1, 500000)
```

In mysql use the repl:

```clojure
(jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
               ["CREATE TABLE large_table (id int NOT NULL PRIMARY KEY AUTO_INCREMENT, foo text);"])

(do (jdbc/insert-multi! (sql-jdbc.conn/db->pooled-connection-spec 5)
                        :large_table
                        (repeat 50000 {:foo (apply str (repeat 5000 "Z"))}))
    :done)

(jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
               ["ALTER TABLE large_table add column properties json default null"])

(jdbc/execute! (sql-jdbc.conn/db->pooled-connection-spec 5)
               ["update large_table set properties = '{\"data\":{\"cols\":null,\"native_form\":{\"query\":\"SELECT `large_table`.`id` AS `id`, `large_table`.`foo` AS `foo` FROM `large_table` LIMIT 1\",\"params\":null},\"results_timezone\":\"UTC\",\"results_metadata\":{\"checksum\":\"0MnSKb8145UERWn18F5Uiw==\",\"columns\":[{\"semantic_type\":\"type/PK\",\"coercion_strategy\":null,\"name\":\"id\",\"field_ref\":[\"field\",200,null],\"effective_type\":\"type/Integer\",\"id\":200,\"display_name\":\"ID\",\"fingerprint\":null,\"base_type\":\"type/Integer\"},{\"semantic_type\":null,\"coercion_strategy\":null,\"name\":\"foo\",\"field_ref\":[\"field\",201,null],\"effective_type\":\"type/Text\",\"id\":201,\"display_name\":\"Foo\",\"fingerprint\":{\"global\":{\"distinct-count\":1,\"nil%\":0.0},\"type\":{\"type/Text\":{\"percent-json\":0.0,\"percent-url\":0.0,\"percent-email\":0.0,\"percent-state\":0.0,\"average-length\":500.0}}},\"base_type\":\"type/Text\"}]},\"insights\":null,\"count\":1}}'"])
```

and then from the terminal client repeat this until we have 800,000 rows:

```sql
insert into large_table (foo, properties)
select foo, properties from large_table;
```

Then can exercise from code with the following:

```clojure
(-> (qp/process-query
     {:database 5 ; use appropriate db and tables here
      :query {:source-table 42
              ;; :limit 1000000
              }
      :type :query}
     ;; don't retain any rows, purely just counting
     ;; so resultset is what retains too many rows
     {:rff (fn [metadata]
             (let [c (volatile! 0)]
               (fn count-rff
                 ([] {:data metadata})
                 ([result] (assoc-in result [:data :count] @c))
                 ([result _row] (vswap! c inc) result))))})
    :data :count)
```

PG was far easier to blow up. Mysql took quite a bit of data. Then we just set a fetch size on the result set so that we (hopefully) only have that many rows in memory in the resultset at once. The streaming will write to the download stream as it goes. PG has one other complication in that the fetch size can only be honored if autoCommit is false. The reasoning seems to be that each statement is in a transaction and commits, and to commit it has to close resultsets and therefore it has to realize the entire resultset, otherwise you would only get the initial page, if any.

* Set default fetch size to 500

```
;; Long queries on gcloud pg
;; limit 10,000
;; fetch size |   t1 |   t2 |   t3
;; -------------------------------
;;        100 | 6030 | 8804 | 5986
;;        500 | 1537 | 1535 | 1494
;;       1000 | 1714 | 1802 | 1611
;;       3000 | 1644 | 1595 | 2044

;; limit 30,000
;; fetch size |    t1 |    t2 |    t3
;; ----------------------------------
;;        100 | 17341 | 15991 | 16061
;;        500 |  4112 |  4182 |  4851
;;       1000 |  5075 |  4546 |  4284
;;       3000 |  5405 |  5055 |  4745
```

* Only set fetch size if not default (0)

Details: `:additional-options "defaultRowFetchSize=3000"` can set a default fetch size and we can easily honor that. This allows overriding per db without much work on our part.

* Remove redshift custom fetch size code

This removes the automatic insertion of a defaultRowFetchSize=5000 on redshift dbs. Now we always set this to 500 in the sql-jdbc statement and prepared statement fields. And we also allow custom ones to persist over our default of 500. One additional benefit of removing this is that it always included the option even if a user added ?defaultRowFetchSize=300 themselves, so this should actually give more control to our users. Profiling quickly on selecting 79,000 rows from redshift, there was essentially no difference between a fetch size of 500 (the default) and 5000 (the old redshift default); both were 12442 ms or so.

* unused require of settings in redshift tests
* Appease the linter
* Unnecessary redshift connection details tests
Ariya Hidayat authored
The issue has been fixed in the past with PR 15839.
Alexander Lesnenko authored
* fix missing description info icon on dashboard cards
* update test
Alexander Polyankin authored
Anton Kulyk authored
* Move revision helpers to own directory
* Add simple utility to format revision messages
* Add messages for basic dashboard cards changes
* Handle null values for dashboard card actions
* Add basic card series revision message support
* Batch multiple changes in a single revision
* Add `isValidRevision` helper
* Return title and description instead of a single string
* Filter out unknown fields in revisions
* Fix viz settings revision descriptions
* Add helpers to revisions unit tests
* Use new revision messages util
* Filter out invalid or unknown revisions
* Capitalize revision descriptions
* Wrap new item name with double-quotes
* Move revisions unit tests to source code directory
* Add basic HistoryModal tests
* Add getChangedFields helper
* Revert getRevisionMessage return type back to str
* Extend isValidRevision check
* Fix getRevisionEventsForTimeline work with updated helper
* Expose revision utils
* Use new messages in HistoryModal
* Remove getRevisionDescription function
* Handle cases when revision's after / before state is null
* Simplify getRevisionMessage
* Use "description" instead of "message"
* Fix dataset_query revision not parsed correctly
* Filter out unknown field change types
* Support collection_id change event
* Return array of changes instead of batching in a single message
* Return JSX from getRevisionEventsForTimeline
* Fix UI
* Remove console.log
* Use "rearranged the cards" message
* Fix e2e test using old revision messages
* Prefer 'after' state to get changed fields
* Fix timeline revision event
* Fix translations
* Add `key` prop to `jt`
* Merge revision files
* Add an option not to lowercase the capitalize str
* Use updated capitalize function
* Fix test string
* Display question's "display" change messages
* [ci nocache]
* Fix tests
* [ci nocache]
Anton Kulyk authored
* Add custom @testing-library/react render wrapper
* Migrate unit tests to custom render function
* Rename helper to `renderWithProviders`
* Remove irrelevant eslint rule disable
Oct 17, 2021
Nemanja Glumac authored