This project is mirrored from https://github.com/metabase/metabase.
Pull mirroring updated.
- Feb 02, 2024
Nemanja Glumac authored
Resolves #38301
- Feb 01, 2024
Nemanja Glumac authored
* Upgrade `actions/download-artifact` action to v4
* Upgrade `actions/upload-artifact` action to v4
- Jan 31, 2024
Nemanja Glumac authored
- Jan 24, 2024
Ryan Laurie authored
Co-authored-by: Vladimir Zaytsev <vladimir@trunk.io>
- Dec 08, 2023
Ryan Laurie authored
- Nov 30, 2023
Ngoc Khuat authored
- Nov 14, 2023
Nemanja Glumac authored
- Nov 13, 2023
Nemanja Glumac authored
- Nov 10, 2023
Nemanja Glumac authored
Follow up after #34209
- Nov 07, 2023
Ryan Kienstra authored
Co-authored-by: Cam Saul <github@camsaul.com>
- Oct 11, 2023
Luis Paolini authored
* Test Java 21
* Bring back Java 17
* Run Java 21 only on `master`
Co-authored-by: Nemanja <31325167+nemanjaglumac@users.noreply.github.com>
- Oct 09, 2023
Tim Macdonald authored
- Sep 29, 2023
Nemanja Glumac authored
Before:
1. prepare-frontend
2. prepare-backend
3. build-static-viz
4. files-changed-check
5. static-viz-files-changed-check
So we were doing all of these expensive operations before step 4, even though they might not even be necessary: if the files-changed check already outputs backend_all, we have to run these tests whether or not there were static-viz changes.
After (see the sketch below):
1. test which files changed
   - if it outputs backend_all -> skip static-viz-bundle and start running tests straight away
   - else, if there are no direct backend changes, run an additional check for static-viz
2. static-viz-files-changed
   - if true -> run tests
   - else -> skip tests
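Read as a predicate, the gating amounts to something like this minimal Clojure sketch; the function and key names are hypothetical, and the real logic lives in GitHub Actions workflow conditions, not in code like this:
```clojure
;; Hypothetical sketch of the CI gating decision described above.
(defn run-static-viz-tests?
  "Decide whether the static-viz-related backend tests should run,
  given the outputs of the two files-changed checks."
  [{:keys [backend-all? static-viz-changed?]}]
  (or backend-all?          ; direct backend changes: run regardless
      static-viz-changed?)) ; otherwise, only when static-viz changed

(run-static-viz-tests? {:backend-all? false :static-viz-changed? true})  ;; => true
(run-static-viz-tests? {:backend-all? false :static-viz-changed? false}) ;; => false
```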
- Sep 26, 2023
Nick Fitzpatrick authored
* output static viz sources, update be workflow and file paths
* build static viz before file check
* extending file check timeout
* fixing mistake
* disable optimization when generating file paths
* prefer offline
* moving workflows to save 89505
* Maybe caching the build?
* removing minification
* upload artifact instead of use cache
* Add workflows back
* reduce files changed timeout
* removing unneeded yarn install
- Aug 29, 2023
Vamsi Peri authored
* Run Cloverage job only on merge to `master`
* Use shorter `github.ref_name`
* [Reopened] Run Cloverage job only on merge to master
Co-authored-by: Nemanja <31325167+nemanjaglumac@users.noreply.github.com>
- Aug 21, 2023
Vamsi Peri authored
This reverts commit a5776529.
Nemanja Glumac authored
- Aug 02, 2023
Cam Saul authored
- Jul 04, 2023
john-metabase authored
This reverts commit 9da0bd76.
- Jun 20, 2023
john-metabase authored
- Jun 08, 2023
Nemanja Glumac authored
We're seeing an increasing number of timeouts in CI without any significant changes to the test suite. This could be due to the performance of GitHub's runners. [ci skip]
- Jun 04, 2023
Ariya Hidayat authored
- May 24, 2023
Ryan Laurie authored
* indexed entities schema
* schema
* endpoint with create and delete
* hooked up tasks
* first tests and hooking it all together
* some e2e tests
* search
* search had to rename value -> name. The search stuff works well for stuff that looks the same. Previously we were renaming `:value` to name in the search result, but search uses searchable-columns-for-model to decide which values to score on, and it wasn't finding :value. So it's easier to just let name flow through.
* include model_pk in results
* send pk back as id, include pk_ref
* rename fk constraint (had copied it)
* populate indexed values with name not value
* GET http://localhost:3000/api/model_index?model_id=135
* populate indexed values on post. Done here for ease of demo; follow-up work will schedule the task immediately, but for now, do it on a thread and return the info.
* test search results
* fix t2 delete syntax on h2
* fix h2
* tests work on h2 and postgres app dbs
* Fix after-insert method. The after method lets you change the data, and I was returning the results of the refresh job as the model-index. Not great.
* unify some cross-db logic for inserting model index values
* Extract shared logic in populating indexed values; mysql support
* clean ns
* I was the bug (endpoint returns the item now)
* fix delete to check perms on model not model-index
* fix tests
* Fix tests
* ignore unused private var
* Add search tests
* remove tap>
* Tests for simple mbql, joined mbql, native
* shutdown scheduler after finishing
* Entity Search + Indexing Frontend (#30082)
* add model index types and mocks
* add integer type check
* add new entities for model indexes
* expose model index toggle in model metadata
* add search description for indexed entities
* add an error boundary to the app bar
* e2e test entity indexes
* update tests and types
* first tests and hooking it all together
* Renumber migrations. I forgot to claim them and I am a bad person.
* Restore changes lost in the rebase
* add sync task to prevent errors in temp. Without this you get errors when the temp db is created, since it wants to set the sync and analyze jobs but those jobs haven't been started.
* Restore and test GET model-index/:model_id
* clean up api: get by model id or model-index-id
* update e2e tests
* ensure correct types on id-ref and value-ref
* simplify lookup
* More extensive testing
* update types
* reorder migrations
* fix tests
* empty to trigger CI
* update types
* Bump clj-kondo. The old version reports `/work/src/metabase/models/model_index.clj:27:6: error: #'metabase.models.interface/add-created-at-timestamp is private` on source:
```clojure
(t2/define-before-insert ::created-at-timestamp
  [instance]
  #_{:clj-kondo/ignore [:private-call]} ;; <- ignores this override
  (#'mi/add-created-at-timestamp instance))
```
* Move task assertions to model namespace. The task system is initialized in the repl, so these all work locally, but in CI that's not necessarily the case. Rather than mocking the task system again, I just leave it in the hands of the other namespace.
* Don't serialize ModelIndex and ModelIndexValue. Seems like it is a setting on an individual instance; no need to have that go across boundaries.
* Just test this on h2. Chasing names across the different dbs is maddening, and the underlying db doesn't really matter.
* indexes on model_index tables
  - `model_index.model_id` for searching by model_id
  - `model_index_value.model_index_id` for getting values for a particular index
* copy/paste error in indexing error message
* `mt/with-temp` -> `t2.with-temp/with-temp`
* use `:hook/created-at-timestamped?`
* move field-ref normalization into models.interface
* nit: alignment
* Ensure write access to underlying model for create/delete
* Assert more about pk/value refs and better error messages
* Don't search indexed entities when user is sandboxed. Adds a `= 1 0` clause to indexed-entity search if the user is sandboxed. Also, got tired of the pattern:
```clojure
(if-let [segmented-user? (resolve 'metabase-enterprise.sandbox.api.util/segmented-user?)]
  (if (segmented-user?) :sandboxed-user :not-sandboxed-user)
  :not-sandboxed-user)
```
and now we have a tasty lil ole'
```clojure
(defenterprise segmented-user? metabase-enterprise.sandbox.api.util [] false)
```
so that you can just
```clojure
(ns foo
  (:require [metabase.public-settings.premium-features :as premium-features]))

(if (premium-features/segmented-user?) :sandboxed :not-sandboxed)
```
* redef premium-features/segmented-user?
* trigger CI
* Alternative GEOJSON.io fix (#30675)
* Alternative GEOJSON.io fix. Rather than not testing, just accept that we might get an expected location back or we might get nil. This actually represents how the application will behave, and it will seamlessly keep working as the service improves or continues throwing errors:
```shell
❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23"
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty</center>
</body>
</html>
❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73"
[{"area_code":"0","organization_name":"GOOGLE","country_code":"US",
  "country_code3":"USA","continent_code":"NA","ip":"8.8.8.8",
  "region":"California","latitude":"34.0544","longitude":"-118.2441",
  "accuracy":5,"timezone":"America\/Los_Angeles","city":"Los Angeles",
  "organization":"AS15169 GOOGLE","asn":15169,"country":"United States"},
 {"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US",
  "country_code3":"USA","continent_code":"NA","ip":"136.49.173.73",
  "region":"Texas","latitude":"30.2423","longitude":"-97.7672",
  "accuracy":5,"timezone":"America\/Chicago","city":"Austin",
  "organization":"AS16591 GOOGLE-FIBER","asn":16591,"country":"United States"}]
```
Changes are basically:
```clojure
(schema= (s/conditional some? <original-expectation-schema>
                        nil?  (s/eq nil))
         response-from-geojson)
```
* Filter to login error messages:
```clojure
(into []
      (map (fn [[log-level error message]]
             [log-level (type error) message]))
      error-messages-captured-in-test)
[[:error clojure.lang.ExceptionInfo "Error geocoding IP addresses {:url https://get.geojs.io/v1/ip/geo.json?ip=127.0.0.1}"]
 [:error clojure.lang.ExceptionInfo "Authentication endpoint error"]]
```
* check timestamps with regex. Seems like they fixed the service:
```shell
curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23"
[{"country":"United States","latitude":"34.0544","longitude":"-118.2441","accuracy":5,"timezone":"America\/Los_Angeles","ip":"8.8.8.8","organization":"AS15169 GOOGLE","country_code3":"USA","asn":15169,"area_code":"0","organization_name":"GOOGLE","country_code":"US","city":"Los Angeles","continent_code":"NA","region":"California"},
 {"country":"United States","latitude":"30.2423","longitude":"-97.7672","accuracy":5,"timezone":"America\/Chicago","ip":"136.49.173.73","organization":"AS16591 GOOGLE-FIBER","country_code3":"USA","asn":16591,"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US","city":"Austin","continent_code":"NA","region":"Texas"},
 {"country":"France","latitude":"48.8582","longitude":"2.3387","accuracy":500,"timezone":"Europe\/Paris","ip":"185.233.100.23","organization":"AS198985 AQUILENET","country_code3":"FRA","asn":198985,"area_code":"0","organization_name":"AQUILENET","country_code":"FR","continent_code":"EU"}]
```
whereas a few hours ago that returned a 500. And now the timestamps are different:
```clojure
;; from schema
"2021-03-18T19:52:41.808482Z"
;; results
"2021-03-18T20:52:41.808482+01:00"
```
Kinda sick of dealing with this and don't want to deal with a schema that matches "near" to a timestamp.
* Restore these. I copied changes over from tim's fix and it included these fixes. But now the service is fixed and these are the original values from cam's original fixes.
* Separate FK reference because of index:
```
ERROR in metabase.db.schema-migrations-test/rollback-test (ChangeLogIterator.java:126)
Uncaught exception, not in assertion.
liquibase.exception.LiquibaseException: liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
java.sql.SQLTransientConnectionException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint
SQLState: "HY000"
errorCode: 1553
```
* delete cascade on model_index fk to report_card
* reorder fk and index migrations
* break apart FK from table definition for rollbacks
* update types and appease the new lint rules
* define models in a toucan2 way
* remove checksum: ANY from migrations
* api permissions 403 tests
* test how we fetch values
* remove empty line, add quotes to migration file
* cleanup from tamas's comments
  - indentation fix
  - inline an inc
  - add values in a tx (also catch errors and update model_index)
  - clarify docstring on `should-deindex?`
  - don't use declare, just define fn earlier
* update after merge
* update to new model name
* only test joins for drivers that support it
* don't test mongo, include scenario in test output
* sets use `disj` not `dissoc`
* handle oracle primary id type: `[94M "Awesome Bronze Plate"]` (type 94M) -> java.math.BigDecimal
* remove magic value of 5000 for threshold
* Reorder unique constraint and remove extra index. Previously we had a unique constraint (model_pk, model_index_id) and an index on model_index_id. If we reorder the unique constraint we get the index on model_index_id for free. So we'll take it.
* remove breakout. Without breakout:
```sql
SELECT "source"."id" AS "ID", "source"."title" AS "TITLE"
FROM (SELECT "PUBLIC"."products"."id" AS "ID",
             "PUBLIC"."products"."title" AS "TITLE"
      FROM "PUBLIC"."products") AS "source"
ORDER BY "source"."title" DESC
```
With breakout:
```sql
SELECT "source"."id" AS "ID", "source"."title" AS "TITLE"
FROM (SELECT "PUBLIC"."products"."id" AS "ID",
             "PUBLIC"."products"."title" AS "TITLE"
      FROM "PUBLIC"."products") AS "source"
GROUP BY "source"."id", "source"."title"
ORDER BY "source"."title" DESC, "source"."id" ASC
```
* restore breakout. It caused some tests with joins to fail.
* update model-index api mock
* Simpler method for indexing. Remove the `generation` part. We'll do the set operations in memory rather than relying on db upserts. Now all app-dbs work the same: we get all indexed values, diff against the current values from the model, and retract and add the appropriate values. "Updates" are counted as a retraction and then an addition rather than separately trying to update a row. Some garbage generation stats, getting garbage samples with:
```clojure
(defn tuple-stats []
  (let [x (-> (t2/query ["select n_live_tup, n_dead_tup, relname from pg_stat_all_tables where relname = 'model_index_value';"])
              (first))]
    {:live (:n_live_tup x) :dead (:n_dead_tup x)}))
```
And using a simple loop to index the values repeatedly:
```clojure
(reduce (fn [stats _]
          (model-index/add-values! model-index)
          (conj stats (tuple-stats)))
        []
        (range 20))
```
With generation style:
```clojure
[{:live 200, :dead 0}   {:live 200, :dead 0}   {:live 200, :dead 0}
 {:live 200, :dead 400} {:live 200, :dead 400} {:live 200, :dead 400}
 {:live 200, :dead 400} {:live 200, :dead 770} {:live 200, :dead 770}
 {:live 200, :dead 787} {:live 200, :dead 787} {:live 200, :dead 825}
 {:live 200, :dead 825} {:live 200, :dead 825} {:live 200, :dead 825}
 {:live 200, :dead 818} {:live 200, :dead 818} {:live 200, :dead 818}
 {:live 200, :dead 818} {:live 200, :dead 854}]
```
With "dump and reload":
```clojure
[{:live 200, :dead 0}    {:live 200, :dead 200}  {:live 200, :dead 200}
 {:live 200, :dead 600}  {:live 200, :dead 600}  {:live 200, :dead 600}
 {:live 200, :dead 800}  {:live 200, :dead 800}  {:live 200, :dead 800}
 {:live 200, :dead 800}  {:live 200, :dead 800}  {:live 200, :dead 1600}
 {:live 200, :dead 1600} {:live 200, :dead 1600} {:live 200, :dead 2200}
 {:live 200, :dead 2200} {:live 200, :dead 2200} {:live 200, :dead 2200}
 {:live 200, :dead 2200} {:live 200, :dead 3600}]
```
And of course, now that it's a no-op:
```clojure
[{:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0}]
```
since no db actions are taken when reindexing. We've traded memory set operations for garbage in the database. Since we're capped at 5000 rows it seems fine for now. (A minimal sketch of this diff-and-apply idea follows this entry.)
* add analogs to `isInteger` to constants.cljc and isa.cljc files
* clarity and do less work on set conversion
* exclude clojure.core/integer? from isa.cljc
* Rename `state_change_at` -> `indexed_at`
* breakout brings along an order-by and fields clause
Co-authored-by: dan sutton <dan@dpsutton.com>
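The "Simpler method for indexing" above reduces to plain set operations. A minimal Clojure sketch of the diff-and-apply idea (the function and key names are hypothetical, not Metabase's actual implementation):
```clojure
;; Minimal sketch of the in-memory set-diff approach described above.
;; Names are hypothetical; Metabase's real implementation differs.
(require '[clojure.set :as set])

(defn index-diff
  "Compare the currently indexed values with the values the model
  returns now, and say what to retract and what to add."
  [indexed-values model-values]
  {:to-retract (set/difference indexed-values model-values)
   :to-add     (set/difference model-values indexed-values)})

;; An unchanged value set is a no-op, so reindexing creates no db garbage:
(index-diff #{[1 "Plate"] [2 "Mug"]} #{[1 "Plate"] [2 "Mug"]})
;; => {:to-retract #{}, :to-add #{}}

;; An "update" is a retraction plus an addition:
(index-diff #{[1 "Plate"]} #{[1 "Bronze Plate"]})
;; => {:to-retract #{[1 "Plate"]}, :to-add #{[1 "Bronze Plate"]}}
```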
- May 23, 2023
Cam Saul authored
Improve `type-of` calculation for arithmetic expressions; fix schema for temporal arithmetic (#30921)
* Fix #29946
* Sort namespaces
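As a rough illustration of what such inference involves, here is a hypothetical Clojure sketch; it is not Metabase's actual `type-of` implementation, only the general idea that an arithmetic expression's result type is computed from its argument types, with temporal arithmetic keeping the temporal type:
```clojure
;; Hypothetical sketch only -- NOT Metabase's actual `type-of`.
(defn expression-type
  "Return a result type keyword for an MBQL-style arithmetic expression
  whose arguments are given as type keywords."
  [[op & arg-types]]
  (let [types (set arg-types)]
    (cond
      ;; temporal arithmetic: a datetime plus an interval stays temporal
      (and (= op :+) (contains? types :type/DateTime)) :type/DateTime
      ;; any float argument promotes the whole expression to float
      (contains? types :type/Float)                    :type/Float
      :else                                            :type/Integer)))

(expression-type [:+ :type/Integer :type/Float])     ;; => :type/Float
(expression-type [:+ :type/DateTime :type/Interval]) ;; => :type/DateTime
(expression-type [:* :type/Integer :type/Integer])   ;; => :type/Integer
```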
- Apr 19, 2023
Cam Saul authored
- Apr 04, 2023
Nemanja Glumac authored
* Increase the BE timeout
* Increase the drivers timeout
[ci skip]
- Mar 02, 2023
Nemanja Glumac authored
Nemanja Glumac authored
- Feb 24, 2023
Nemanja Glumac authored
- Feb 23, 2023
Nemanja Glumac authored
- Feb 20, 2023
Tim Macdonald authored
Especially helpful for work @camsaul is doing: https://github.com/clj-kondo/clj-kondo/issues/1996
- Feb 17, 2023
john-metabase authored
* Removes :presto driver and tests
* Merges :presto-common into :presto-jdbc
* Adds migration to update presto databases to presto-jdbc
Co-authored-by: Cam Saul <github@camsaul.com>
Nemanja Glumac authored
This makes it possible to mark these matrix-generated jobs as required, even in situations where the file-path filter doesn't trigger the original test.
- Feb 10, 2023
Nemanja Glumac authored
- Feb 09, 2023
Aleksandr Lesnenko authored
- Feb 08, 2023
Braden Shepherdson authored
This is needed with the move to `deps.edn` for tracking all dependencies, since it makes shadow-cljs rely on the `clojure` binary.
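For context, shadow-cljs can source its dependencies from `deps.edn` rather than resolving them itself, which is what makes it shell out to the `clojure` binary. A minimal illustrative `shadow-cljs.edn` (not Metabase's actual config; the alias and build names are hypothetical):
```clojure
;; shadow-cljs.edn -- illustrative sketch, not Metabase's actual config.
;; With :deps set, shadow-cljs invokes the `clojure` binary to resolve
;; the classpath from deps.edn instead of resolving deps itself.
{:deps   {:aliases [:cljs]} ; hypothetical deps.edn alias
 :builds {:app {:target     :browser
                :output-dir "resources/public/js"}}}
```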
Cam Saul authored
- Jan 23, 2023
Tim Macdonald authored
This version fixes a bug where Malli was generating types unknown to Kondo and Kondo was flagging an erroneous type mismatch. See the changelog for more details: https://github.com/clj-kondo/clj-kondo/blob/master/CHANGELOG.md#20230120
- Jan 18, 2023
Tim Macdonald authored
* Bump clj-kondo to 2022.12.10 [Fixes #27512]
* Use Kondo's :config-in-call instead of grosser hacks
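Kondo's `:config-in-call` scopes linter settings to the body of calls to a given macro instead of disabling a linter project-wide. A minimal sketch of a `.clj-kondo/config.edn` entry (the macro name here is hypothetical, not one of Metabase's):
```clojure
;; .clj-kondo/config.edn -- illustrative sketch; the macro name is hypothetical.
;; Inside any call to my.app/with-fixtures, turn off the
;; :unresolved-symbol linter instead of disabling it everywhere.
{:config-in-call
 {my.app/with-fixtures
  {:linters {:unresolved-symbol {:level :off}}}}}
```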
- Dec 28, 2022
Nemanja Glumac authored
This reverts commit a63f9bd1.