This project is mirrored from https://github.com/metabase/metabase.
  1. May 26, 2023
  2. May 25, 2023
  3. May 24, 2023
    • Fix enterprise model loading (#30969) · 26eac5e8
      Noah Moss authored
      
      * fix enterprise model loading
      
      * docstrings and remove tap
      
      * bump ci
      
      * address comment
      
      * Update src/metabase/models.clj
      
      Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
      
      ---------
      
      Co-authored-by: metamben <103100869+metamben@users.noreply.github.com>
    • Add Icon for entity search (#30491) · a5b41362
      Ryan Laurie authored
      * add index icon
      
      * scale up icon
    • Fix tsconfig errors in vscode (#30400) · e1f1ebc4
      Uladzimir Havenchyk authored
      
      * Fix tsconfig errors
      
      * fixup! Fix tsconfig errors
      
      ---------
      
      Co-authored-by: Jesse Devaney <22608765+JesseSDevaney@users.noreply.github.com>
    • docs - serialization v2 (#30914) · c427af6c
      Jeff Bruemmer authored
    • Indexed Entities (#30487) · 7f51f3c9
      Ryan Laurie authored
      * indexed entities schema
      
      * schema
      
      * endpoint with create and delete
      
      * hooked up tasks
      
      * first tests and hooking it all together
      
      * some e2e tests
      
      * search
      
      * search
      
      had to rename value -> name. Search works best when every result has the
      same shape. Previously we renamed `:value` to `name` in the search
      result, but search uses `searchable-columns-for-model` to decide which
      values to score on and it wasn't finding `:value`. So it's easier to just
      let `name` flow through.
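
      A minimal, self-contained illustration of the mismatch (the column list and helper here are assumptions, not the actual search code):

      ```clojure
      ;; the scorer only looks at columns declared searchable for the model, so a
      ;; value stored under :value never gets scored -- exposing it as :name does
      (def searchable-columns {"indexed-entity" [:name]})

      (defn scoreable-values [model row]
        (map row (searchable-columns model)))

      (scoreable-values "indexed-entity" {:value "Awesome Bronze Plate"}) ;; => (nil)
      (scoreable-values "indexed-entity" {:name  "Awesome Bronze Plate"}) ;; => ("Awesome Bronze Plate")
      ```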
      
      * include model_pk in results
      
      * send pk back as id, include pk_ref
      
      * rename fk constraint (had copied it)
      
      * populate indexed values with name not value
      
      * GET http://localhost:3000/api/model_index?model_id=135
      
      * populate indexed values on post
      
      done here for ease of demo. Follow-up work will schedule the task
      immediately, but for now, do it on a thread and return the info.
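
      A minimal sketch of that temporary on-thread approach (the helper name and ns layout are assumptions; `add-values!` is the populate fn referenced later in this message):

      ```clojure
      (ns example.on-thread-populate
        (:require [metabase.models.model-index :as model-index]))

      ;; populate on a background thread and return right away so the POST response
      ;; isn't blocked; follow-up work hands this to the task scheduler instead
      (defn create-and-populate! [model-index]
        (future (model-index/add-values! model-index))
        model-index)
      ```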
      
      * test search results
      
      * fix t2 delete syntax on h2
      
      * fix h2
      
      * tests work on h2 and postgres app dbs
      
      * Fix after insert method
      
      the after method lets you change the data, and I was returning the result
      of the refresh job as the model-index. Not great.
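
      A minimal sketch of the hook shape (the model keyword is an assumption; the point is only what gets returned):

      ```clojure
      (ns example.after-insert
        (:require [toucan2.core :as t2]))

      ;; an after-insert hook can transform the row: whatever it returns is what
      ;; callers see as the inserted model-index, so run any refresh work for its
      ;; side effects and return the instance itself, not the job's return value
      (t2/define-after-insert :model/ModelIndex
        [model-index]
        ;; (refresh-index! model-index) ; hypothetical side effect; result discarded
        model-index)
      ```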
      
      * unify some cross-db logic for inserting model index values
      
      * Extract shared logic in populating indexed values; mysql support
      
      * clean ns
      
      * I was the bug (endpoint returns the item now)
      
      * fix delete to check perms on model not model-index
      
      * fix tests
      
      * Fix tests
      
      * ignore unused private var
      
      * Add search tests
      
      * remove tap>
      
      * Tests for simple mbql, joined mbql, native
      
      * shutdown scheduler after finishing
      
      * Entity Search + Indexing Frontend (#30082)
      
      * add model index types and mocks
      
      * add integer type check
      
      * add new entities for model indexes
      
      * expose model index toggle in model metadata
      
      * add search description for indexed entities
      
      * add an error boundary to the app bar
      
      * e2e test entity indexes
      
      * update tests and types
      
      * first tests and hooking it all together
      
      * Renumber migrations
      
      i forgot to claim them and i am a bad person
      
      * Restore changes lost in the rebase
      
      * add sync task to prevent errors in temp
      
      without this you get errors when the temp db is created, because it wants
      to schedule the sync and analyze jobs but those jobs haven't been started
      yet.
      
      * Restore and test GET model-index/:model_id
      
      * clean up api: get by model id or model-index-id
      
      * update e2e tests
      
      * ensure correct types on id-ref and value-ref
      
      * simplify lookup
      
      * More extensive testing
      
      * update types
      
      * reorder migrations
      
      * fix tests
      
      * empty to trigger CI
      
      * update types
      
      * Bump clj-kondo
      
      old version reports
      
      /work/src/metabase/models/model_index.clj:27:6: error: #'metabase.models.interface/add-created-at-timestamp is private
      
      on source:
      
      ```clojure
      (t2/define-before-insert ::created-at-timestamp
        [instance]
        #_{:clj-kondo/ignore [:private-call]} ;; <- ignores this override
        (#'mi/add-created-at-timestamp instance))
      ```
      
      * Move task assertions to model namespace
      
      the task system is initialized in the repl so these all work
      locally. But in CI that's not necessarily the case. And rather than
      mocking the task system again I just leave it in the hands of the other
      namespace.
      
      * Don't serialize ModelIndex and ModelIndexValue
      
      seems like it is a setting on an individual instance. No need to have
      that go across boundaries
      
      * Just test this on h2.
      
      chasing names across the different dbs is maddening. and the underlying
      db doesn't really matter
      
      * indexes on model_index tables
      
      - `model_index.model_id` for searching by model_id
      - `model_index_value.model_index_id` getting values for a particular
        index
      
      * copy/paste error in indexing error message
      
      * `mt/with-temp` -> `t2.with-temp/with-temp`
      
      * use `:hook/created-at-timestamped?`
      
      * move field-ref normalization into models.interface
      
      * nit: alignment
      
      * Ensure write access to underlying model for create/delete
      
      * Assert more about pk/value refs and better error messages
      
      * Don't search indexed entities when user is sandboxed
      
      Adds a `= 1 0` clause to indexed-entity search if the user is
      sandboxed.
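
      A minimal sketch of that guard (the helper, ns layout, and where-clause shape are assumptions):

      ```clojure
      (ns example.sandboxed-search
        (:require [metabase.public-settings.premium-features :as premium-features]))

      ;; when the current user is sandboxed, AND a never-true clause into the
      ;; indexed-entity search query so it can't return any rows
      (defn maybe-hide-indexed-entities [honeysql-query]
        (cond-> honeysql-query
          (premium-features/segmented-user?)
          (update :where (fn [existing]
                           (if existing
                             [:and existing [:= 1 0]]
                             [:= 1 0])))))
      ```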
      
      Also, got tired of the pattern:
      
      ```clojure
      (if-let [segmented-user? (resolve 'metabase-enterprise.sandbox.api.util/segmented-user?)]
        (if (segmented-user?)
          :sandboxed-user
          :not-sandboxed-user)
        :not-sandboxed-user)
      ```
      
      and now we have a tasty lil ole'
      
      ```clojure
      (defenterprise segmented-user?
        metabase-enterprise.sandbox.api.util
        []
        false)
      ```
      
      that you can just
      
      ```clojure
      (ns foo
        (:require
         [metabase.public-settings.premium-features :as premium-features]))
      
      (if (premium-features/segmented-user?)
        :sandboxed
        :not-sandboxed)
      ```
      
      * redef premium-features/segmented-user?
      
      * trigger CI
      
      * Alternative GEOJSON.io fix (#30675)
      
      * Alternative GEOJSON.io fix
      
      rather than not testing, just accept that we might get an expected
      location back or we might get nil. This actually represents how the
      application will behave and will seamlessly work as the service improves
      or continues throwing errors:
      
      ```shell
      ❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23"
      <html>
      <head><title>500 Internal Server Error</title></head>
      <body>
      <center><h1>500 Internal Server Error</h1></center>
      <hr><center>openresty</center>
      </body>
      </html>
      
      ❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73"
      [{"area_code":"0","organization_name":"GOOGLE","country_code":"US",
        "country_code3":"USA","continent_code":"NA","ip":"8.8.8.8",
        "region":"California","latitude":"34.0544","longitude":"-118.2441",
        "accuracy":5,"timezone":"America\/Los_Angeles","city":"Los Angeles",
        "organization":"AS15169 GOOGLE","asn":15169,"country":"United States"},
       {"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US",
       "country_code3":"USA","continent_code":"NA","ip":"136.49.173.73",
       "region":"Texas","latitude":"30.2423","longitude":"-97.7672",
       "accuracy":5,"timezone":"America\/Chicago","city":"Austin",
       "organization":"AS16591 GOOGLE-FIBER","asn":16591,"country":"United States"}]
      ```
      
      Changes are basically
      
      ```clojure
      (schema= (s/conditional some? <original-expectation-schema>
                              nil?  (s/eq nil))
               response-from-geojson)
      ```
      
      * Filter to login error messages
      
      ```clojure
      (into [] (map (fn [[log-level error message]] [log-level (type error) message]))
            error-messages-captured-in-test)
      [[:error
        clojure.lang.ExceptionInfo
        "Error geocoding IP addresses {:url https://get.geojs.io/v1/ip/geo.json?ip=127.0.0.1}"]
       [:error clojure.lang.ExceptionInfo "Authentication endpoint error"]]
       ```
      
      * check timestamps with regex
      
      seems like they fixed the service
      
      ```shell
      curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23
      
      "
      [{"country":"United States","latitude":"34.0544","longitude":"-118.2441","accuracy":5,"timezone":"America\/Los_Angeles","ip":"8.8.8.8","organization":"AS15169 GOOGLE","country_code3":"USA","asn":15169,"area_code":"0","organization_name":"GOOGLE","country_code":"US","city":"Los Angeles","continent_code":"NA","region":"California"},{"country":"United States","latitude":"30.2423","longitude":"-97.7672","accuracy":5,"timezone":"America\/Chicago","ip":"136.49.173.73","organization":"AS16591 GOOGLE-FIBER","country_code3":"USA","asn":16591,"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US","city":"Austin","continent_code":"NA","region":"Texas"},{"country":"France","latitude":"48.8582","longitude":"2.3387","accuracy":500,"timezone":"Europe\/Paris","ip":"185.233.100.23","organization":"AS198985 AQUILENET","country_code3":"FRA","asn":198985,"area_code":"0","organization_name":"AQUILENET","country_code":"FR","continent_code":"EU"}]
      ```
      
      whereas a few hours ago that returned a 500. And now the timestamps are different
      
      ```
      ;; from schema
      "2021-03-18T19:52:41.808482Z"
      ;; results
      "2021-03-18T20:52:41.808482+01:00"
      ```
      
      kinda sick of dealing with this and don't want to deal with a schema
      that matches "near" to a timestamp.
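
      A minimal sketch of the regex-based check (the exact pattern is an assumption):

      ```clojure
      ;; accept anything shaped like an ISO-8601 timestamp instead of pinning an
      ;; exact instant/offset, since the service reports different offsets over time
      (def iso-timestamp-regex
        #"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?([+-]\d{2}:\d{2}|Z)")

      (some? (re-matches iso-timestamp-regex "2021-03-18T20:52:41.808482+01:00")) ;; => true
      (some? (re-matches iso-timestamp-regex "2021-03-18T19:52:41.808482Z"))      ;; => true
      ```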
      
      * Restore these
      
      i copied changes over from tim's fix and it included these fixes. But
      now the service is fixed and these are the original values from cam's
      original fixes
      
      * Separate FK reference because of index
      
      ERROR in metabase.db.schema-migrations-test/rollback-test (ChangeLogIterator.java:126)
      Uncaught exception, not in assertion.
      
         liquibase.exception.LiquibaseException: liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
         liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
         liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
         java.sql.SQLTransientConnectionException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint
           SQLState: "HY000"
          errorCode: 1553
      
      * delete cascade on model_index fk to report_card
      
      * reorder fk and index migrations
      
      * break apart FK from table definition for rollbacks
      
      * update types and appease the new lint rules
      
      * define models in a toucan2 way
      
      * remove checksum: ANY from migrations
      
      * api permissions 403 tests
      
      * test how we fetch values
      
      * remove empty line, add quotes to migration file
      
      * cleanup from tamas's comments
      
      - indention fix
      - inline an inc
      - add values in a tx (also catch errors and update model_index)
      - clarify docstring on `should-deindex?`
      - don't use declare, just define fn earlier
      
      * update after merge
      
      * update to new model name
      
      * only test joins for drivers that support it
      
      * don't test mongo, include scenario in test output
      
      * sets use `disj` not `dissoc`
      
      * handle oracle primary id type
      
      `[94M "Awesome Bronze Plate"]`
      (type 94M) -> java.math.BigDecimal
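
      A minimal sketch of handling that (the helper is hypothetical):

      ```clojure
      ;; Oracle hands numeric primary keys back as java.math.BigDecimal (e.g. 94M);
      ;; coerce them to longs so they behave like pks from the other drivers
      (defn normalize-pk [pk]
        (if (instance? java.math.BigDecimal pk)
          (long pk)
          pk))

      (normalize-pk 94M) ;; => 94
      (normalize-pk 94)  ;; => 94
      ```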
      
      * remove magic value of 5000 for threshold
      
      * Reorder unique constraint and remove extra index
      
      previously had a unique constraint on (model_pk, model_index_id) and an
      index on model_index_id. If we reorder the unique constraint to lead with
      model_index_id, its backing index also covers lookups on model_index_id
      alone (an index serves queries on its leading column), so we get that
      index for free. So we'll take it.
      
      * remove breakout.
      
      without breakout
      
      ```sql
       SELECT "source"."id"    AS "ID",
             "source"."title" AS "TITLE"
      FROM   (SELECT "PUBLIC"."products"."id"    AS "ID",
                     "PUBLIC"."products"."title" AS "TITLE"
              FROM   "PUBLIC"."products") AS "source"
      ORDER  BY "source"."title" DESC
      ```
      
      with breakout
      
      ```sql
       SELECT "source"."id"    AS "ID",
             "source"."title" AS "TITLE"
      FROM   (SELECT "PUBLIC"."products"."id"    AS "ID",
                     "PUBLIC"."products"."title" AS "TITLE"
              FROM   "PUBLIC"."products") AS "source"
      GROUP  BY "source"."id",
                "source"."title"
      ORDER  BY "source"."title" DESC,
                "source"."id" ASC
      ```
      
      * restore breakout
      
      caused some tests with joins to fail
      
      * update model-index api mock
      
      * Simpler method for indexing
      
      Remove the `generation` part. We'll do the set operations in memory
      rather than relying on db upserts. Now all app-dbs work the same and we
      get all indexed values, diff against current values from the model, and
      retract and add the appropriate values. "Updates" are counted as a
      retraction and then addition rather than separately trying to update a
      row.
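
      A minimal sketch of the dump-and-diff approach (helper names are assumptions):

      ```clojure
      (ns example.reindex
        (:require [clojure.set :as set]))

      ;; diff the currently indexed [pk name] tuples against the model's current
      ;; values: retract what disappeared, add what's new; an "update" becomes a
      ;; retract + add. When nothing changed, no db writes happen at all.
      (defn plan-reindex [indexed-now values-from-model]
        (let [indexed (set indexed-now)
              fresh   (set values-from-model)]
          {:to-remove (set/difference indexed fresh)
           :to-add    (set/difference fresh indexed)}))

      (plan-reindex [[1 "Foo"] [2 "Bar"]] [[2 "Bar"] [3 "Baz"]])
      ;; => {:to-remove #{[1 "Foo"]}, :to-add #{[3 "Baz"]}}
      ```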
      
      Some garbage generation stats:
      
      Getting garbage samples with
      
      ```clojure
      (defn tuple-stats
        []
        (let [x (-> (t2/query ["select n_live_tup, n_dead_tup, relname from pg_stat_all_tables where relname = 'model_index_value';"])
                    (first))]
          {:live (:n_live_tup x)
           :dead (:n_dead_tup x)}))
      ```
      
      And using a simple loop to index the values repeatedly:
      
      ```clojure
      (reduce (fn [stats _]
                (model-index/add-values! model-index)
                (conj stats (tuple-stats)))
              []
              (range 20))
      ```
      
      With generation style:
      
      ```clojure
      [{:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 400}
       {:live 200, :dead 400}
       {:live 200, :dead 400}
       {:live 200, :dead 400}
       {:live 200, :dead 770}
       {:live 200, :dead 770}
       {:live 200, :dead 787}
       {:live 200, :dead 787}
       {:live 200, :dead 825}
       {:live 200, :dead 825}
       {:live 200, :dead 825}
       {:live 200, :dead 825}
       {:live 200, :dead 818}
       {:live 200, :dead 818}
       {:live 200, :dead 818}
       {:live 200, :dead 818}
       {:live 200, :dead 854}]
      ```
      
      With "dump and reload":
      
      ```clojure
      [{:live 200, :dead 0}
       {:live 200, :dead 200}
       {:live 200, :dead 200}
       {:live 200, :dead 600}
       {:live 200, :dead 600}
       {:live 200, :dead 600}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 1600}
       {:live 200, :dead 1600}
       {:live 200, :dead 1600}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 3600}]
      ```
      
      And of course now that it's a no-op,
      
      ```clojure
      [{:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}]
      ```
      
      since no db actions are taken when reindexing values that haven't changed.
      We've traded garbage in the database for in-memory set operations. Since
      we're capped at 5000 rows it seems fine for now.
      
      * add analogs to `isInteger` to constants.cljc and isa.cljc files
      
      * clarity and do less work on set conversion
      
      * exclude clojure.core/integer? from isa.cljc
      
      * Rename `state_change_at` -> `indexed_at`
      
      * breakout brings along an order-by and fields clause
      
      ---------
      
      Co-authored-by: dan sutton <dan@dpsutton.com>
    • docs - update API docs (#30992) · d7c7d378
      Jeff Bruemmer authored
    • update release list (#30995) · 79c6d641
      Jeff Bruemmer authored
    • Fix column settings being duplicated when fields have the same `id-or-name`, but different `join-alias` (#27487) · b0c4d992
      Cal Herries authored
    • max retention env var (#30943) · 13b1644c
      Jeff Bruemmer authored
    • Timestamp in milliseconds (#24721) · cc73fe51
      José María González Calabozo authored
      This patch allows Metabase to cast timestamps in milliseconds to datetimes.
    • Embedded link cards (#30917) · 3412ac29
      Ryan Laurie authored
      * open metabase links in same iFrame when embedding
      
      * unit test link behavior
    • Add onDisplayUpdate function to visualizations to update settings (#30773) · 277bd65b
      Nick Fitzpatrick authored
      * Add onDisplayUpdate function to visualizations to update settings
      
      * Moving update settings to ChartTypeSidebar
      
      * updating test name
    • [MLv2] Use the fk field name rather than the table name for long display name on implicit joins (#30961) · a86c3bec
      Case Nelson authored
      
      * [MLv2] Use the fk field name rather than the table name for long display name on implicit joins
      
      * Fix tests
      
      * Fix FE test
    • docs - add note about env var command (#30907) · edcb1f44
      Jeff Bruemmer authored
      * add note about env var command
      
      * add link to docs.
    • Fix typo in the pre-release script (#30974) · 183bf4f5
      Nemanja Glumac authored
      [ci skip]
    • Fix typo in js.clj (#30847) · 0d682d8c
      Ikko Eltociear Ashimine authored
      betwen -> between
      
      [ci skip]
    • Fixed MB_JETTY_PORT env var (#30954) · a91d1e37
      Roman Abdulmanov authored
  4. May 23, 2023