This project is mirrored from https://github.com/metabase/metabase.
- Sep 27, 2023
-
Denis Berezin authored
* Fix duplicated api call with better cache check
* Fix issues with e2e
* Fix e2e tests
* Simplify solution
* Add entities loader fetch requests queue
* Refactor waiting for loaded state to await for promise
* Fix e2e test
* Fix unit tests
-
Ngoc Khuat authored
-
Uladzimir Havenchyk authored
-
Alexander Polyankin authored
-
Cal Herries authored
-
Cal Herries authored
-
- Sep 26, 2023
-
adam-james authored
* Alert API sends all alert information to any user. Change this to only show those pulses the user has created or is a recipient of
* Adjust tests to show that only admins see all alerts. Regular users only see their alerts.
* Change docstring to describe behaviour of the endpoint.
-
Tim Macdonald authored
Co-authored-by: Ryan Laurie <iethree@gmail.com>
-
Cam Saul authored
* Remove deprecated current-db-time method
* Address PR feedback
* sync-timezone! should handle java.time.ZoneId and java.time.ZoneOffset
* Allow time zone offsets as zone IDs
* Fix tests and suppress clj-kondo on a few forms
* Suppress warning on one more occurrence of db-default-timezone
* Map offset Z to time zone UTC when syncing
* Accept zone offsets, zone ids and strings for h2 and oracle
* Mark snowflake as not supporting report timezone
* Revert "Mark snowflake as not supporting report timezone"

  This reverts commit 70b15a60334fda707d695bc6c01621961f1c5aa8.

* Fix report-timezone handling
* Numeric timezone

---------

Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
Co-authored-by: Tamás Benkő <tamas@metabase.com>
-
lbrdnk authored
Co-authored-by: Cal Herries <39073188+calherries@users.noreply.github.com>
-
Nick Fitzpatrick authored
* output static viz sources, update be workflow and file paths
* build static viz before file check
* extending file check timeout
* fixing mistake
* disable optimization when generating file paths
* prefer offline
* moving workflows to save 89505
* Maybe caching the build?
* removing minification
* upload artifact instead of use cache
* Add workflows back
* reduce files changed timeout
* removing unneeded yarn install
-
dpsutton authored
* reintroduce the functions

* using affinities

  Had to update the affinities map to handle multiple definitions of the same affinity name.

  ```clojure
  core=> (let [affinities (-> ["table" "GenericTable"]
                              dashboard-templates/get-dashboard-template
                              dash-template->affinities-map)]
           (affinities "HourOfDayCreateDate"))
  [["HourOfDayCreateDate" {:dimensions ["CreateTimestamp"], :metrics ["Count"], :score 50}]
   ["HourOfDayCreateDate" {:dimensions ["CreateTime"], :metrics ["Count"], :score 50}]]
  ```

  Here there were two cards defined as "HourOfDayCreateDate": one looked for a CreateTimestamp, the other for a CreateTime. When the affinities were treated as unique names, the CreateTime definition clobbered the CreateTimestamp one, which then went unsatisfied. Now we keep both definitions around. The group-by is stable, so the timestamp definition comes first and is the one matched.

* i don't like reduced for some reason

* fixup affinities

  1. Affinities were in the wrong shape.

     Before:

     ```clojure
     {"card-name" [["card-name" definition] ["card-name" definition]] ,,,}
     ```

     New shape:

     ```clojure
     {"card-name" [definition definition] ,,,}
     ```

     For example:

     ```clojure
     core=> (let [affinities (-> ["table" "GenericTable"]
                                 dashboard-templates/get-dashboard-template
                                 dash-template->affinities-map)]
              (affinities "HourOfDayCreateDate"))
     ({:dimensions ["CreateTimestamp"], :metrics ["Count"], :score 50}
      {:dimensions ["CreateTime"], :metrics ["Count"], :score 50})
     ```

  2. Erroring on unfindable filters/metrics using `{:filter [:dimension ::unsatisfiable]}` didn't work correctly because of this:

     ```clojure
     ;; it doesn't look for dimensions by keyword
     core=> (dashboard-templates/collect-dimensions [:dimension ::nope])
     ()
     core=> (dashboard-templates/collect-dimensions [:dimension (str ::nope)])
     (":metabase.automagic-dashboards.core/nope")
     ```

* Updating dash-template->affinities-map to dash-template->affinities

  Two key changes were made:

  - The card name is now part of the 'affinity' object (under the `:affinity-name` key). This flattening should also make affinities easier to deal with and more flexible (e.g. arbitrary groupings).
  - All base dimensions are exploded out into these affinity objects under the `:base-dims` key. Note that this may result in growth of the number of affinities when named items are repeated with different definitions. For example, card names are not unique, resulting in N affinities per card. Also, metrics and filters need not be unique. GenericTable, for example, has 6 definitions of the Last30Days filter. This will result in 6X the number of affinities created for each card using that filter.

  This change encapsulates affinities; before, you'd need to know which dimensions underlay any given metric or filter. ATM we do not package each filter or metric definition in each affinity object, but perhaps that would be worth doing in the future for complete encapsulation.

  This change also allows for very simple matching logic. To match card affinities, for example, you just filter all affinities for which the affinity dimensions are a subset of the provided dimensions.

* Updated tests for new affinity code. Removed accidental cruft.

* Change shape of affinities

  The "satisfied-affins" are of this shape:

  ```clojure
  {"AverageIncomeByMonth"   #{"Income" "Timestamp"},
   "AverageDiscountByMonth" #{"Income" "Discount" "Timestamp"}}
  ```

  And they are ordered so they can drive the `make-cards` function in the future. The idea is that for right now we'll look up the card based on the affinity name and, when multiple cards are found, the set of dimensions they depend on. And that will drive the layout. But in the future, just the affinity itself will drive how we make a card layout. This is the first step towards that.

* Satisfied-affinities shape is ordered map to vector of sets of dims

  ```clojure
  {"RowcountLast30Days" [#{"CreateTimestamp"}], "Rowcount" [#{}]}
  ```

  Since each could be met by a few definitions in multiple ways.

* Update comment

  ```clojure
  core=> (let [affinities (-> ["table" "GenericTable"]
                              dashboard-templates/get-dashboard-template
                              dash-template->affinities)]
           (match-affinities affinities {:available-dimensions {"JoinDate" :whatever}}))
  {"Rowcount" [#{}],
   "RowcountLast30Days" [#{"JoinDate"}],
   "CountByJoinDate" [#{"JoinDate"}],
   "DayOfWeekJoinDate" [#{"JoinDate"}],
   "DayOfMonthJoinDate" [#{"JoinDate"}],
   "MonthOfYearJoinDate" [#{"JoinDate"}],
   "QuerterOfYearJoinDate" [#{"JoinDate"}]}
  ```

* comment block is helpful

* Drive `make-cards` from affinities

  The old style was that make-cards looped over all cards to see which ones were satisfied. Now we've taken a notion of "interestingness", which we call affinities: which dimensions are interesting in combination with each other. Right now these are derived from card combinations, but that will change going forward. So now going into the make-cards loop are interesting combinations, and we then grab a card-template from the combination. Again, it's a double lookup back to cards, but this lets us break that cycle and come up with interesting card templates just based on the groupings themselves. In the future we're going to want an affinity to produce multiple card-templates, so this will become a mapcat of some sort rather than a map.

* Removing pre-check from card-candidates and corresponding unit test, as this is not an invariant -- card-candidates should always be satisfied with our affinity mapping.

* comment and docstring

* Changed names for clarity (affinity -> affinity-sets) and modified the return value of match-affinities to have values of sets of sets rather than vectors of sets. This makes matching simpler and easier. Added a docstring to CardTemplateProvider and started adding some tests.

* Revert "Changed names for clarity (affinity -> affinity-sets) and modified"

  This reverts commit dd2aef1fea8e6deb5f970e51f698e4b72fa97b32.

* Something about either the cherry-pick or stale state made the previous change of affinities (as a vector of sets) to affinity-sets (a set of sets) cause failures. It may just be that the implementation was broken and the tests passed due to stale state ¯\_(ツ)_/¯. This picks out the clarity and doc changes and reverts the set of sets.

* Adding schemas for the affinity functions in metabase.automagic-dashboards.core

* Created the AffinitySetProvider protocol, which, given an item, will provide the set of affinities (a set of sets of dimensions) required to bind to that item.

  The initial implementation reifies the protocol over a dashboard template and provides affinity sets for cards, but this protocol could be extended to provide affinities for whatever object we desire. The initial implementation looks like so:

  ```clojure
  (p/defprotocol+ AffinitySetProvider
    "For some item, determine the affinity sets of that item. This is a set of
     sets, each underlying set being a set of dimensions that, if satisfied,
     specify affinity to the item."
    (create-affinity-sets [this item]))

  (mu/defn base-dimension-provider :- [:fn #(satisfies? AffinitySetProvider %)]
    "Takes a dashboard template and produces a function that takes a dashcard
     template and returns a seq of potential dimension sets that can satisfy the card."
    [{card-metrics :metrics card-filters :filters} :- ads/dashboard-template]
    (let [dim-groups (fn [items]
                       (-> (->> items
                                (map (fn [item]
                                       [(ffirst item)
                                        (set (dashboard-templates/collect-dimensions item))]))
                                (group-by first))
                           (update-vals (fn [v] (mapv second v)))
                           (update "this" conj #{})))
          m->dims    (dim-groups card-metrics)
          f->dims    (dim-groups card-filters)]
      (reify AffinitySetProvider
        (create-affinity-sets [_ {:keys [dimensions metrics filters]}]
          (let [dimset                (set (map ffirst dimensions))
                underlying-dim-groups (concat (map m->dims metrics)
                                              (map f->dims filters))]
            (set (map (fn [lower-dims] (reduce into dimset lower-dims))
                      (apply math.combo/cartesian-product underlying-dim-groups))))))))
  ```

* Adding specs for dashcard, dashcards, context (minimal) and instrumenting make-cards.

  I'm wondering if we should move card-candidates into the layout-producer protocol, since we're using known affinities to make a thing, not taking a baby step to make a thing. The one gotcha is that there's a positional index inserted in there which IDK how we use other than maybe layout ATM.

* Added more schemas and externalized the matching of affinities to potential dimensions (or any map conforming to a map of dimension names to matches of items).

  This generalizes our ability to match affinity groups with "things" that we want to generate.

* Renaming output of all-satisfied-bindings to satisfied-bindings to prevent a variable-shadowing error.

---------

Co-authored-by: Mark Bastian <markbastian@gmail.com>
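The subset-based matching described in this commit ("filter all affinities for which the affinity dimensions are a subset of the provided dimensions") can be sketched roughly as follows. This is an illustrative stand-in, not the actual Metabase implementation: `match-affinities*` and `sample-affinities` are hypothetical names and data in the shapes discussed above.

```clojure
(require '[clojure.set :as set])

;; Hypothetical affinities in the shape discussed above:
;; affinity name -> vector of dimension sets, any one of which satisfies it.
(def sample-affinities
  {"RowcountLast30Days" [#{"CreateTimestamp"}]
   "Rowcount"           [#{}]
   "CountByJoinDate"    [#{"JoinDate"}]})

(defn match-affinities*
  "Keep, per affinity name, only the dimension sets that are a subset of the
   available dimensions. Names with no satisfiable dimension set are dropped."
  [affinities available-dims]
  (into {}
        (keep (fn [[affinity-name dim-sets]]
                (when-let [satisfied (seq (filter #(set/subset? % available-dims) dim-sets))]
                  [affinity-name (vec satisfied)])))
        affinities))

(match-affinities* sample-affinities #{"JoinDate"})
;; => {"Rowcount" [#{}], "CountByJoinDate" [#{"JoinDate"}]}
```

Note that the empty dimension set `#{}` is a subset of anything, so "Rowcount" is always satisfied — matching the `{"Rowcount" [#{}]}` entries in the REPL output above.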
-
Cal Herries authored
-
Tim Macdonald authored
[Fixes #34080]
-
Nicolò Pretto authored
Revert "show columns added after the dashcard has been created in the viz settings (#33886)" (#34116)
-
Ngoc Khuat authored
* Fix the flaky test 'downgrade-dashboard-tabs-test' by bumping the migration version we upgrade to during the test.

  I'm not fully sure why it was flaky in the first place. For some reason the rollback step doesn't get triggered on MySQL, even though Liquibase does acknowledge that the migration was run. Either way, bumping the version seems to take the flakiness away.
-
Cal Herries authored
-
Mahatthana (Kelvin) Nomsawadi authored
-
- Sep 25, 2023
-
Uladzimir Havenchyk authored
-
Braden Shepherdson authored
Fixes #32373.
-
Cam Saul authored
Support nested transactions for app DB

---------

Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
Co-authored-by: Ngoc Khuat <qn.khuat@gmail.com>
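For context on what "nested transactions" on a single app-DB connection typically means: one common mechanism is JDBC savepoints, where the inner "transaction" rolls back to a savepoint rather than aborting the enclosing transaction. The sketch below illustrates that mechanism only — `with-nested-transaction*` is a hypothetical helper, not Metabase's actual implementation.

```clojure
(import '(java.sql Connection Savepoint))

(defn with-nested-transaction*
  "Run thunk f inside a savepoint on conn, so a failure in the nested work
   rolls back only to the savepoint and leaves the outer transaction usable."
  [^Connection conn f]
  (let [^Savepoint sp (.setSavepoint conn)]
    (try
      (let [result (f)]
        ;; Inner work succeeded: discard the savepoint, keep the changes.
        (.releaseSavepoint conn sp)
        result)
      (catch Throwable t
        ;; Inner work failed: undo only the work done since the savepoint.
        (.rollback conn sp)
        (throw t)))))
```

`Connection.setSavepoint`, `releaseSavepoint`, and `rollback(Savepoint)` are standard JDBC and are supported by H2, Postgres, and MySQL app DBs.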
-
shaun authored
-
Uladzimir Havenchyk authored
* Lazy load EntityForm
* Replace lazy loading with require
* add a comment
-
Cal Herries authored
-
Oisin Coveney authored
-
Cal Herries authored
-
Tim Macdonald authored
* Fix typo in comment
* Kick CI
* Run backport reminder on every push to an open PR
* Kick CI again with Nemanja's magic

---------

Co-authored-by: Nemanja <31325167+nemanjaglumac@users.noreply.github.com>
-
Tim Macdonald authored
Set `last_acknowledged_version` automatically for new users

This prevents new users from being told about what's new in Metabase when… everything is new in Metabase for them.
-
Alexander Polyankin authored
-
Nicolò Pretto authored
* new implementation after rebase, also works for table details
* fix: makes #28304 pass again
* fix settings undefined crash
* use already defined variable
* get columns from settings param
* fix test data
* add unit test for metabase#28304
-
Oisin Coveney authored
-
- Sep 22, 2023
-
Nemanja Glumac authored
* Reproduce #33996
-
Case Nelson authored
* [MLv2] Add query can-run function
* Update src/metabase/lib/native.cljc

  Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>

* Add test for non-native query
* Fix tests - snippet-id can be optional

---------

Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
-
dpsutton authored
* Bump h2 to fix flaky test

  Symptom:
  --------

  Error in linter job:

  ```
  With premium token features = #{"audit-app"} :metabase-enterprise.audit-app.pages.dashboards/most-popular-with-avg-speed
  expected: (schema= {:status (s/eq :completed), s/Keyword s/Any}
                     (qp/process-query query))
  actual: clojure.lang.ExceptionInfo: Error reducing result rows: Error running audit query: Invalid value "NULL" for parameter "result FETCH"; SQL statement:
  WITH "MOST_POPULAR" AS (SELECT "D"."ID" AS "DASHBOARD_ID", "D"."NAME" AS "DASHBOARD_NAME", COUNT(*) AS "VIEWS" FROM "VIEW_LOG" AS "VL" LEFT JOIN "REPORT_DASHBOARD" AS "D" ON "VL"."MODEL_ID" = "D"."ID" WHERE "VL"."MODEL" = 'dashboard' GROUP BY "D"."ID" ORDER BY COUNT(*) DESC LIMIT ?),
  "CARD_RUNNING_TIME" AS (SELECT "QE"."CARD_ID", AVG("QE"."RUNNING_TIME") AS "AVG_RUNNING_TIME" FROM "QUERY_EXECUTION" AS "QE" WHERE "QE"."CARD_ID" IS NOT NULL GROUP BY "QE"."CARD_ID"),
  "DASH_AVG_RUNNING_TIME" AS (SELECT "D"."ID" AS "DASHBOARD_ID", AVG("RT"."AVG_RUNNING_TIME") AS "AVG_RUNNING_TIME" FROM "REPORT_DASHBOARDCARD" AS "DC" LEFT JOIN "CARD_RUNNING_TIME" AS "RT" ON "DC"."CARD_ID" = "RT"."CARD_ID" LEFT JOIN "REPORT_DASHBOARD" AS "D" ON "DC"."DASHBOARD_ID" = "D"."ID" WHERE "D"."ID" IN (SELECT "DASHBOARD_ID" FROM "MOST_POPULAR") GROUP BY "D"."ID")
  SELECT "MP"."DASHBOARD_ID", "MP"."DASHBOARD_NAME", "MP"."VIEWS", "RT"."AVG_RUNNING_TIME" FROM "MOST_POPULAR" AS "MP" LEFT JOIN "DASH_AVG_RUNNING_TIME" AS "RT" ON "MP"."DASHBOARD_ID" = "RT"."DASHBOARD_ID" ORDER BY "MP"."VIEWS" DESC LIMIT ?
  [90008-214]
  <stack traces of us catching and rethrowing>
  Caused by: org.h2.jdbc.JdbcSQLDataException: Invalid value "NULL" for parameter "result FETCH"; SQL statement:
  [same SQL statement as above] [90008-214]
   at org.h2.message.DbException.getJdbcSQLException (DbException.java:646)
      org.h2.message.DbException.getJdbcSQLException (DbException.java:477)
      org.h2.message.DbException.get (DbException.java:223)
      org.h2.message.DbException.getInvalidValueException (DbException.java:298)
      org.h2.command.query.Query.getOffsetFetch (Query.java:912)
      org.h2.command.query.Select.queryWithoutCache (Select.java:768)
      org.h2.command.query.Query.queryWithoutCacheLazyCheck (Query.java:197)
      org.h2.command.query.Query.query (Query.java:512)
  ...
  ```

  BUT this only causes an issue for tests run under the cloverage linter. It's not an issue under any other test scenarios.

  Reproduction:
  -------------

  I can get the same stack traces with the following:

  ```clojure
  dashboards=> (clojure.java.jdbc/query
                {:datasource (:data-source metabase.db.connection/*application-db*)}
                ["select 1 limit null"])
  Execution error (JdbcSQLDataException) at org.h2.message.DbException/getJdbcSQLException (DbException.java:646).
  Invalid value "NULL" for parameter "result FETCH"; SQL statement:
  select 1 limit null [90008-214]

  dashboards=> (clojure.java.jdbc/query
                {:datasource (:data-source metabase.db.connection/*application-db*)}
                ["select 1 limit ?" nil])
  Execution error (JdbcSQLDataException) at org.h2.message.DbException/getJdbcSQLException (DbException.java:646).
  Invalid value "NULL" for parameter "result FETCH"; SQL statement:
  select 1 limit ? [90008-214]
  ```

  But with a normal repl, running the test does not error:

  ```clojure
  (deftest all-queries-test
    (mt/with-test-user :crowberto
      (with-temp-objects
        [objects]
        (premium-features-test/with-premium-features #{:audit-app}
          ;; limit to just the version
          (doseq [query-type #_(all-query-methods)
                  [:metabase-enterprise.audit-app.pages.dashboards/most-popular-with-avg-speed]]
            (testing query-type
              (do-tests-for-query-type query-type objects)))))))

  pages-test=> (clojure.test/run-test all-queries-test)

  Testing metabase-enterprise.audit-app.pages-test

  Ran 1 tests containing 1 assertions.
  0 failures, 0 errors.
  {:test 1, :pass 1, :fail 0, :error 0, :type :summary}
  ```

  And even more frustrating, just grabbing the sql and params and calling the same code:

  ```clojure
  common=> (let [driver (mdb/db-type)
                 sql    check/sql
                 params check/params]
             (with-open [conn (.getConnection mdb.connection/*application-db*)
                         stmt (doto (sql-jdbc.execute/prepared-statement driver conn sql params)
                                foo)
                         rs   (sql-jdbc.execute/execute-prepared-statement! driver stmt)]
               (into [] (clojure.java.jdbc/result-set-seq rs))))
  []
  ```

  I can get the test to fail by running the cloverage runner with a socket repl started and then running the test during that instrumented run:

  ```
  clojure -J"$(socket-repl 6000)" -X:dev:ee:ee-dev:test:cloverage
  ```

  But I'm not able to find anything interesting. It's a bug in CTEs with parameters in h2, and it seems that only this query is sensitive. If I inline the value it works under 2.1.214. And if I bump the version to 2.2.224 it works. So I'm bumping the version.

* inline the offending value

* Revert "Bump h2 to fix flaky test"

  This reverts commit 9cd6aed19cc4c143f79e611bb0c92218f18da365.
-
Case Nelson authored
* [MLv2] Handle coalesce type after conversion
* Update test/metabase/lib/aggregation_test.cljc

  Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>

* Only update fields on converted queries
* Don't convert if base-type exists
* Guard against non-numeric field ids
* Fix test with hardcoded ids
* Allow metadata/field to return nil on not found
* Add ipaddress and mongobsonid types to equality-comparable-types
* Support is-empty not-empty for mongobsonid
* Add base-type to FE tests
* Fix e2e tests - column metadata is valid with field types filled in

---------

Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
-
Case Nelson authored
* [MLv2] Add function for database engine
* Update src/metabase/lib/js.cljs

  Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>

* Use native query

---------

Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
-
Case Nelson authored
* [MLv2] Fix template-tags snippet-id

  Looking at this bug I found a couple of problems that the converters were not handling properly. First, the conversion to js was copying snake_case keys to kebab-case instead of renaming them. Second, the conversion to clojure was not picking up the kebab-case keys but using the snake_case keys. So I dropped the TemplateTag encoder and decoder and went forward with js->clj and clj->js, since the only tricky part was keeping the tag names as strings and keeping the :type as a keyword.

* Address review comments
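The "tag names stay strings, :type becomes a keyword" shape described in this commit can be illustrated with a plain-Clojure normalization step. This is a hypothetical sketch — `normalize-template-tags` and the sample tag are illustrative, not the actual converter in metabase.lib:

```clojure
(defn normalize-template-tags
  "Given template tags as a map of tag-name -> tag map, keep the outer tag
   names as strings but coerce each tag's :type to a keyword."
  [tags]
  (update-vals tags #(update % :type keyword)))

(normalize-template-tags
 {"snippet: my-snippet" {:type "snippet", :snippet-id 123, :name "snippet: my-snippet"}})
;; => {"snippet: my-snippet" {:type :snippet, :snippet-id 123, :name "snippet: my-snippet"}}
```

The point is the asymmetry: a blanket keywordize-keys pass would also mangle the user-chosen tag names, which must survive round-tripping as strings.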
-
Uladzimir Havenchyk authored
-
Uladzimir Havenchyk authored
-
Nemanja Glumac authored
* Make archived item rely on the `model` for its type

  The `type` prop doesn't exist in the list of items. `model`, on the other hand, exists, and it shows the underlying item model, which almost 1:1 matches the desired type. Almost, because we call questions "card" in the code. That will be accounted for in the next commit.

* Handle `card` and `dataset` models

  Show "question" and "model" in the UI instead.

* Move `getTranslatedEntityName` to the common utils
* Use translated entities for the archived item tooltip
* Switch back to the implicit return
* Lowercase the translated model name
* Add missing string translation
* Add unit test for `getTranslatedEntityName`
-