This project is mirrored from https://github.com/metabase/metabase.
Pull mirroring updated.
- Jun 21, 2023
Uladzimir Havenchyk authored
- Jun 20, 2023
Ryan Laurie authored
john-metabase authored
Nemanja Glumac authored
- Jun 16, 2023
Nemanja Glumac authored
I forgot to do this in #31432. As a result, some PRs have been stuck in CI due to the required check. This PR will run the fallback job for the `actions` E2E group when the changed files don't require E2E tests to run. [ci skip]
- Jun 15, 2023
Lena authored
Previously: one “feature or project” template with the `.Epic` label automatically attached. Now: two templates, one for epics and one for smaller things that don't require milestones (or the `.Epic` label; they get `.Task` instead). Squashed commit messages follow:
* Remove unused optional field
* Update title to current convention
* Fix typo
* Remove unused section
* Change capitalization for consistency
* Remove redundant description
* Rename the issue template to mention epic
* Shorten the description of the issue template
* Lighten up the milestones section. Unfortunately, it doesn't seem possible to prepopulate the issue with an empty tasklist (https://github.com/orgs/community/discussions/50502).
* Add an issue template for tasks [ci skip]. We had one for epics, with milestones and the `.Epic` label. Pretty often that label was accidentally left on smaller issues that were created using the same template. Now the smaller issues have their own template and their own label. Yay. https://metaboat.slack.com/archives/C04CGFHCJDD/p1686548875239219
* Remove redundant angle brackets
* Use a string to get a valid YAML file. Without the quotes, GitHub says `Error in user YAML: (<unknown>): did not find expected key while parsing a block mapping at line 1 column 1`
* Remove rarely if ever used placeholders
* Call out the optional-ness of some links
* Contextualize a header. Context can be some text, a Slack thread, a Notion page, another issue, whatever.
- Jun 14, 2023
Nemanja Glumac authored
* Extract `questionDetails`
* Move `public-sharing` to the `sharing` folder
* Remove a redundant test. This part was covered in `public-sharing.cy.spec.js`.
* Create a separate `public-dashboard` E2E spec:
  - Extract the relevant part from `public.cy.spec.js`
  - Make all tests in `public-dashboard.cy.spec.js` run in isolation
* Make tests in `public.cy.spec.js` run in isolation
* Remove a redundant wait
* Limit the query results to speed the test up
* Merge the public questions E2E tests
* Merge `downloads` with the `sharing` E2E group
* Remove `downloads` from the folder list
* Refactor `public-dashboard.cy.spec.js` (see the sketch below):
  - Clean up linter warnings by using semantic selectors
  - Speed the test up by using the API
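A minimal sketch of the isolation pattern described above, assuming Metabase's shared Cypress helpers (a `restore` helper and an admin sign-in command); the spec body, command names, and selectors are illustrative, not the actual test code:

```js
// Illustrative sketch only — not the actual public-dashboard spec.
// `restore` is assumed to come from the repo's shared e2e helpers,
// and `signInAsAdmin` is an assumed custom Cypress command.
import { restore } from "e2e/support/helpers";

describe("scenarios > sharing > public dashboard", () => {
  beforeEach(() => {
    // Reset the app database before every test so each test runs in
    // isolation instead of depending on state from earlier tests.
    restore();
    cy.signInAsAdmin();

    // Setting fixtures up through the API is faster than driving the UI.
    cy.request("POST", "/api/dashboard", { name: "Public dashboard" });
  });

  it("shows a public link", () => {
    // Semantic selectors keep the linter happy and the test readable.
    cy.findByRole("button", { name: "Sharing" }).click();
  });
});
```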
- Jun 12, 2023
Roman Abdulmanov authored
Nemanja Glumac authored
Stress-testing a spec with a lot of tests can easily time out before the 20-minute mark. This commit bumps that timeout to 60 minutes. [ci skip]
- Jun 08, 2023
Nemanja Glumac authored
* Move core actions E2E tests to a separate folder
* Run the new `actions` E2E group in CI. Metabase actions deserve their own E2E group, as the test suite around them is expected to grow. This commit introduces the `actions` E2E group to the CI.
Nemanja Glumac authored
We're seeing an increasing number of timeouts in CI without any significant changes in the test suite. This could be due to GitHub's runner performance. [ci skip]
- Jun 06, 2023
Ariya Hidayat authored
- Jun 04, 2023
Ariya Hidayat authored
- Jun 02, 2023
Nemanja Glumac authored
Simply invoking `./bin/build.sh` will produce the uberjar without a version, which is why @ariya created #30915 in the first place. But the build script already takes `:version` and `:edition` arguments, and passing those is exactly what this PR does, rather than manually unzipping the uberjar in order to tweak its `version.properties` file.
- Jun 01, 2023
Nemanja Glumac authored
[ci skip]
Nemanja Glumac authored
@jaril informed me that Replay.io is missing metadata from Metabase E2E tests. Apparently, trying to access a system environment variable from the Cypress support file didn't work. The solution is to prefix the environment variable with `CYPRESS_` and then read it using `Cypress.env()`. This is what the Cypress documentation suggests: https://docs.cypress.io/guides/guides/environment-variables#Option-3-CYPRESS_ [ci skip]
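As a sketch of the mechanism the commit relies on: Cypress strips the `CYPRESS_` prefix from environment variables, so a value exported in CI becomes readable in the support file via `Cypress.env()`. The variable name below is illustrative, not the one used in the commit:

```js
// CI shell step (illustrative variable name):
//   export CYPRESS_REPLAY_METADATA_TEST_RUN_ID="abc123"
//
// e2e/support file — the CYPRESS_ prefix is stripped, so the value
// is read back without it:
const testRunId = Cypress.env("REPLAY_METADATA_TEST_RUN_ID");

// Reading process.env here would not work: support files run in the
// browser, where the system environment is not available.
console.log("Replay test run id:", testRunId);
```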
- May 31, 2023
Ariya Hidayat authored
- May 30, 2023
Ariya Hidayat authored
- May 29, 2023
Nemanja Glumac authored
- May 26, 2023
Nemanja Glumac authored
* Install the `replay.io` library
* Register `replay.io` in the Cypress config
* Run E2E tests using the Replay chromium browser, but only in CI
* Upload Replay.io recordings to the dashboard
* Manually install the Replay.io browser
* Always upload recordings
* Pass in a custom test run id
* Disable asserts and send replays to a separate team
* Upload OSS recordings as well
* Use a specific Ubuntu version
* Record and run Replay.io on `master` only
* Do not toggle CI browsers in the config
* Test run: pass the `replay-chromium` browser as a CLI flag in a run
* Fix the multi-line command
* Use the replay plugin conditionally (see the sketch below)
* Set the flag correctly
* Require the node process
* Remove sourcemap vars
* Record using replay.io on schedule
* Explicitly name replay runs
---------
Co-authored-by: Jaril <jarilvalenciano@gmail.com>
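A sketch of what conditionally registering the Replay plugin can look like; the exact `@replayio/cypress` plugin API varies by version, so the registration call here is an assumption rather than the commit's actual code:

```js
// cypress.config.js — illustrative sketch only.
const { defineConfig } = require("cypress");

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      // Only wire up Replay in CI so local runs are unaffected.
      if (process.env.CI) {
        // Assumed import shape; check the installed @replayio/cypress
        // version for its actual plugin entry point.
        const replay = require("@replayio/cypress");
        replay.default(on, config);
      }
      return config;
    },
  },
});
```

Per the commit message, the Replay browser is then selected on the command line in CI runs, e.g. `cypress run --browser replay-chromium`.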
- May 24, 2023
Ryan Laurie authored
* indexed entities schema
* schema
* endpoint with create and delete
* hooked up tasks
* first tests and hooking it all together
* some e2e tests
* search
* search: had to rename value -> name. The search stuff works well for stuff that looks the same. Previously we were renaming `:value` to name in the search result, but search uses searchable-columns-for-model to decide which values to score on and wasn't finding `:value`. So it's easier to just let name flow through.
* include model_pk in results
* send pk back as id, include pk_ref
* rename fk constraint (had copied it)
* populate indexed values with name, not value
* GET http://localhost:3000/api/model_index?model_id=135
* populate indexed values on post. Done here for ease of demo; follow-up work will schedule the task immediately, but for now, do it on a thread and return the info.
* test search results
* fix t2 delete syntax on h2
* fix h2
* tests work on h2 and postgres app dbs
* Fix the after-insert method. The after method lets you change the data, and I was returning the results of the refresh job as the model-index. Not great.
* unify some cross-db logic for inserting model index values
* Extract shared logic in populating indexed values; mysql support
* clean ns
* I was the bug (the endpoint returns the item now)
* fix delete to check perms on the model, not the model-index
* fix tests
* Fix tests
* ignore unused private var
* Add search tests
* remove tap>
* Tests for simple mbql, joined mbql, native
* shut down the scheduler after finishing
* Entity Search + Indexing Frontend (#30082)
* add model index types and mocks
* add integer type check
* add new entities for model indexes
* expose model index toggle in model metadata
* add search description for indexed entities
* add an error boundary to the app bar
* e2e test entity indexes
* update tests and types
* first tests and hooking it all together
* Renumber migrations. I forgot to claim them and I am a bad person.
* Restore changes lost in the rebase
* add a sync task to prevent errors in temp. Without this you get errors when the temp db is created, as it wants to set the sync and analyze jobs but those jobs haven't been started.
* Restore and test GET model-index/:model_id
* clean up the api: get by model id or model-index id
* update e2e tests
* ensure correct types on id-ref and value-ref
* simplify lookup
* More extensive testing
* update types
* reorder migrations
* fix tests
* empty to trigger CI
* update types
* Bump clj-kondo. The old version reports `/work/src/metabase/models/model_index.clj:27:6: error: #'metabase.models.interface/add-created-at-timestamp is private` on source:

```clojure
(t2/define-before-insert ::created-at-timestamp
  [instance]
  #_{:clj-kondo/ignore [:private-call]} ;; <- ignores this override
  (#'mi/add-created-at-timestamp instance))
```

* Move task assertions to the model namespace. The task system is initialized in the repl, so these all work locally. But in CI that's not necessarily the case, and rather than mocking the task system again, I just leave it in the hands of the other namespace.
* Don't serialize ModelIndex and ModelIndexValue. It seems like this is a setting on an individual instance; no need to have that go across boundaries.
* Just test this on h2. Chasing names across the different dbs is maddening, and the underlying db doesn't really matter.
* indexes on model_index tables:
  - `model_index.model_id` for searching by model_id
  - `model_index_value.model_index_id` for getting values for a particular index
* copy/paste error in the indexing error message
* `mt/with-temp` -> `t2.with-temp/with-temp`
* use `:hook/created-at-timestamped?`
* move field-ref normalization into models.interface
* nit: alignment
* Ensure write access to the underlying model for create/delete
* Assert more about pk/value refs, with better error messages
* Don't search indexed entities when the user is sandboxed. Adds a `= 1 0` clause to the indexed-entity search if the user is sandboxed. Also, got tired of the pattern:

```clojure
(if-let [segmented-user? (resolve 'metabase-enterprise.sandbox.api.util/segmented-user?)]
  (if (segmented-user?) :sandboxed-user :not-sandboxed-user)
  :not-sandboxed-user)
```

and now we have a tasty lil ole'

```clojure
(defenterprise segmented-user? metabase-enterprise.sandbox.api.util [] false)
```

that you can just

```clojure
(ns foo
  (:require [metabase.public-settings.premium-features :as premium-features]))

(if (premium-features/segmented-user?) :sandboxed :not-sandboxed)
```

* redef premium-features/segmented-user?
* trigger CI
* Alternative GEOJSON.io fix (#30675)
* Alternative GEOJSON.io fix. Rather than not testing, just accept that we might get an expected location back or we might get nil. This actually represents how the application will behave, and it will seamlessly work as the service improves or continues throwing errors:

```shell
❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23"
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty</center>
</body>
</html>

❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73"
[{"area_code":"0","organization_name":"GOOGLE","country_code":"US",
  "country_code3":"USA","continent_code":"NA","ip":"8.8.8.8",
  "region":"California","latitude":"34.0544","longitude":"-118.2441",
  "accuracy":5,"timezone":"America\/Los_Angeles","city":"Los Angeles",
  "organization":"AS15169 GOOGLE","asn":15169,"country":"United States"},
 {"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US",
  "country_code3":"USA","continent_code":"NA","ip":"136.49.173.73",
  "region":"Texas","latitude":"30.2423","longitude":"-97.7672",
  "accuracy":5,"timezone":"America\/Chicago","city":"Austin",
  "organization":"AS16591 GOOGLE-FIBER","asn":16591,"country":"United States"}]
```

Changes are basically:

```clojure
(schema= (s/conditional some? <original-expectation-schema>
                        nil?  (s/eq nil))
         response-from-geojson)
```

* Filter to login error messages:

```clojure
(into []
      (map (fn [[log-level error message]]
             [log-level (type error) message]))
      error-messages-captured-in-test)
;; =>
[[:error clojure.lang.ExceptionInfo "Error geocoding IP addresses {:url https://get.geojs.io/v1/ip/geo.json?ip=127.0.0.1}"]
 [:error clojure.lang.ExceptionInfo "Authentication endpoint error"]]
```

* check timestamps with regex. Seems like they fixed the service:

```shell
curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23"
[{"country":"United States","latitude":"34.0544","longitude":"-118.2441","accuracy":5,"timezone":"America\/Los_Angeles","ip":"8.8.8.8","organization":"AS15169 GOOGLE","country_code3":"USA","asn":15169,"area_code":"0","organization_name":"GOOGLE","country_code":"US","city":"Los Angeles","continent_code":"NA","region":"California"},{"country":"United States","latitude":"30.2423","longitude":"-97.7672","accuracy":5,"timezone":"America\/Chicago","ip":"136.49.173.73","organization":"AS16591 GOOGLE-FIBER","country_code3":"USA","asn":16591,"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US","city":"Austin","continent_code":"NA","region":"Texas"},{"country":"France","latitude":"48.8582","longitude":"2.3387","accuracy":500,"timezone":"Europe\/Paris","ip":"185.233.100.23","organization":"AS198985 AQUILENET","country_code3":"FRA","asn":198985,"area_code":"0","organization_name":"AQUILENET","country_code":"FR","continent_code":"EU"}]
```

whereas a few hours ago that returned a 500. And now the timestamps are different:

```
;; from schema
"2021-03-18T19:52:41.808482Z"
;; results
"2021-03-18T20:52:41.808482+01:00"
```

Kinda sick of dealing with this and don't want to deal with a schema that matches "near" to a timestamp.

* Restore these. I copied changes over from Tim's fix and it included these fixes. But now the service is fixed and these are the original values from Cam's original fixes.
* Separate the FK reference because of the index:

```
ERROR in metabase.db.schema-migrations-test/rollback-test (ChangeLogIterator.java:126)
Uncaught exception, not in assertion.
liquibase.exception.LiquibaseException: liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
java.sql.SQLTransientConnectionException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint
SQLState: "HY000"
errorCode: 1553
```

* delete cascade on the model_index fk to report_card
* reorder fk and index migrations
* break apart the FK from the table definition for rollbacks
* update types and appease the new lint rules
* define models in a toucan2 way
* remove checksum: ANY from migrations
* api permissions 403 tests
* test how we fetch values
* remove an empty line, add quotes to the migration file
* cleanup from Tamas's comments:
  - indentation fix
  - inline an inc
  - add values in a tx (also catch errors and update model_index)
  - clarify the docstring on `should-deindex?`
  - don't use declare, just define the fn earlier
* update after merge
* update to the new model name
* only test joins for drivers that support it
* don't test mongo; include the scenario in test output
* sets use `disj`, not `dissoc`
* handle the oracle primary id type: `[94M "Awesome Bronze Plate"]` (type 94M) -> java.math.BigDecimal
* remove the magic value of 5000 for the threshold
* Reorder the unique constraint and remove the extra index. Previously we had a unique constraint (model_pk, model_index_id) and an index on model_index_id. If we reorder the unique constraint we get the index on model_index_id for free. So we'll take it.
* remove breakout. Without breakout:

```sql
SELECT "source"."id" AS "ID", "source"."title" AS "TITLE"
FROM (SELECT "PUBLIC"."products"."id" AS "ID", "PUBLIC"."products"."title" AS "TITLE"
      FROM "PUBLIC"."products") AS "source"
ORDER BY "source"."title" DESC
```

With breakout:

```sql
SELECT "source"."id" AS "ID", "source"."title" AS "TITLE"
FROM (SELECT "PUBLIC"."products"."id" AS "ID", "PUBLIC"."products"."title" AS "TITLE"
      FROM "PUBLIC"."products") AS "source"
GROUP BY "source"."id", "source"."title"
ORDER BY "source"."title" DESC, "source"."id" ASC
```

* restore breakout. Removing it caused some tests with joins to fail.
* update the model-index api mock
* Simpler method for indexing. Remove the `generation` part. We'll do the set operations in memory rather than relying on db upserts. Now all app-dbs work the same: we get all indexed values, diff against the current values from the model, and retract and add the appropriate values. "Updates" are counted as a retraction and then an addition rather than separately trying to update a row. Some garbage generation stats, getting garbage samples with:

```clojure
(defn tuple-stats []
  (let [x (-> (t2/query ["select n_live_tup, n_dead_tup, relname from pg_stat_all_tables where relname = 'model_index_value';"])
              (first))]
    {:live (:n_live_tup x) :dead (:n_dead_tup x)}))
```

And using a simple loop to index the values repeatedly:

```clojure
(reduce (fn [stats _]
          (model-index/add-values! model-index)
          (conj stats (tuple-stats)))
        []
        (range 20))
```

With the generation style:

```clojure
[{:live 200, :dead 0}   {:live 200, :dead 0}   {:live 200, :dead 0}   {:live 200, :dead 400}
 {:live 200, :dead 400} {:live 200, :dead 400} {:live 200, :dead 400} {:live 200, :dead 770}
 {:live 200, :dead 770} {:live 200, :dead 787} {:live 200, :dead 787} {:live 200, :dead 825}
 {:live 200, :dead 825} {:live 200, :dead 825} {:live 200, :dead 825} {:live 200, :dead 818}
 {:live 200, :dead 818} {:live 200, :dead 818} {:live 200, :dead 818} {:live 200, :dead 854}]
```

With "dump and reload":

```clojure
[{:live 200, :dead 0}    {:live 200, :dead 200}  {:live 200, :dead 200}  {:live 200, :dead 600}
 {:live 200, :dead 600}  {:live 200, :dead 600}  {:live 200, :dead 800}  {:live 200, :dead 800}
 {:live 200, :dead 800}  {:live 200, :dead 800}  {:live 200, :dead 800}  {:live 200, :dead 1600}
 {:live 200, :dead 1600} {:live 200, :dead 1600} {:live 200, :dead 2200} {:live 200, :dead 2200}
 {:live 200, :dead 2200} {:live 200, :dead 2200} {:live 200, :dead 2200} {:live 200, :dead 3600}]
```

And of course, now that re-indexing unchanged values is a no-op:

```clojure
[{:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}
 {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0} {:live 200, :dead 0}]
```

since no db actions are taken when re-indexing. We've traded database garbage for in-memory set operations. Since we're capped at 5000 rows, it seems fine for now.

* add analogs to `isInteger` to the constants.cljc and isa.cljc files
* clarity, and do less work on set conversion
* exclude clojure.core/integer? from isa.cljc
* Rename `state_change_at` -> `indexed_at`
* breakout brings along an order-by and a fields clause
---------
Co-authored-by: dan sutton <dan@dpsutton.com>
Nemanja Glumac authored
[ci skip]
- May 23, 2023
Ariya Hidayat authored
Nemanja Glumac authored
* Introduce a stress-test E2E workflow
* Explain how to stress-test an E2E flake fix
* Disallow test retries in stress-test mode (see the sketch below)
* Increase the job timeout
* Re-word the number of times to run the test
* Keep labels casing uniform
* Address a review suggestion
* Fix an unintentional line break
* Multi-line command fix
* Format the spec path example as code
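A minimal sketch of the retry toggle, assuming the stress-test flag is exposed to Cypress through the environment; the `CYPRESS_STRESS_TEST` variable name and the normal-mode retry budget are assumptions:

```js
// cypress.config.js — illustrative sketch of disabling retries in
// stress-test mode. The CYPRESS_STRESS_TEST env var name is an
// assumption, not taken from this commit.
const { defineConfig } = require("cypress");

const isStressTest = !!process.env.CYPRESS_STRESS_TEST;

module.exports = defineConfig({
  e2e: {
    // A flake fix is only proven if the test passes many runs in a
    // row without retries, so stress-test mode disallows them.
    retries: isStressTest ? 0 : { runMode: 2, openMode: 0 },
  },
});
```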
Ariya Hidayat authored
- Abort if the intended version conflicts with a past release
- Bail out if the commit hash isn't part of a release branch (see the sketch below)
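A hedged sketch of what such release-script guards might look like; the function names, branch name, and the use of `git tag` and `git merge-base` are assumptions, not the commit's actual implementation:

```js
// release-guards.js — illustrative sketch only.
const { execSync } = require("child_process");

// Abort if the intended version matches an existing release tag.
function assertVersionIsNew(version) {
  const tags = execSync("git tag --list").toString().split("\n");
  if (tags.includes(`v${version}`)) {
    throw new Error(`Version ${version} conflicts with a past release`);
  }
}

// Bail out if the commit isn't reachable from the release branch.
// `git merge-base --is-ancestor` exits non-zero when it isn't,
// which makes execSync throw.
function assertCommitOnReleaseBranch(hash, branch) {
  try {
    execSync(`git merge-base --is-ancestor ${hash} origin/${branch}`);
  } catch (e) {
    throw new Error(`Commit ${hash} is not part of ${branch}`);
  }
}

assertVersionIsNew("1.46.3"); // hypothetical version
assertCommitOnReleaseBranch("abc1234", "release-x.46.x"); // hypothetical inputs
```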
Cam Saul authored
Improve `type-of` calculation for arithmetic expressions; fix the schema for temporal arithmetic (#30921)
* Fix #29946
* Sort namespaces
- May 19, 2023
Nemanja Glumac authored
- May 18, 2023
shaun authored
- May 16, 2023
Roman Abdulmanov authored
Denis Berezin authored
- May 15, 2023
Denis Berezin authored
Aleksandr Lesnenko authored
* update team.json
* do not auto-create team labels if they don't exist (see the sketch below)
* question masters team
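A hedged sketch of how a labeling workflow can check for an existing label instead of auto-creating it; the Octokit calls are standard, but whether this automation runs via `actions/github-script`, and the team label name, are assumptions:

```js
// Inside an actions/github-script step (illustrative sketch).
// `github` and `context` are provided by the action; the label
// name below is hypothetical.
const label = "team/QueryingComponents";

const { data: labels } = await github.rest.issues.listLabelsForRepo({
  owner: context.repo.owner,
  repo: context.repo.repo,
  per_page: 100,
});

// Only apply the label when it already exists in the repo;
// addLabels may otherwise create it implicitly.
if (labels.some((l) => l.name === label)) {
  await github.rest.issues.addLabels({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    labels: [label],
  });
}
```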
- May 11, 2023
Ryan Laurie authored
Nemanja Glumac authored
- May 08, 2023
Ariya Hidayat authored
- May 03, 2023
Ryan Laurie authored
Ariya Hidayat authored
- May 02, 2023
Aleksandr Lesnenko authored
- May 01, 2023
Aleksandr Lesnenko authored