This project is mirrored from https://github.com/metabase/metabase.
  1. Aug 01, 2023
  2. Jul 27, 2023
    • Run E2E tests on Chrome v114 (#32688) · 53450025
      Nemanja Glumac authored
      As of two days ago we started seeing Chrome crashing in E2E tests.
      An example of the failed run: https://github.com/metabase/metabase/actions/runs/5665970958/job/15354475344
      
      ```
      We detected that the Chromium Renderer process just crashed.
      
      This is the equivalent to seeing the 'sad face' when Chrome dies.
      
      This can happen for a number of different reasons:
      
      - You wrote an endless loop and you must fix your own code
      - You are running Docker (there is an easy fix for this: see link below)
      - You are running lots of tests on a memory intense application.
          - Try enabling experimentalMemoryManagement in your config file.
          - Try lowering numTestsKeptInMemory in your config file.
      - You are running in a memory starved VM environment.
          - Try enabling experimentalMemoryManagement in your config file.
          - Try lowering numTestsKeptInMemory in your config file.
      - There are problems with your GPU / GPU drivers
      - There are browser bugs in Chromium
      ```
      
I'm not quite sure whether this is a bug in Chrome v115 or some interplay between Cypress and this particular Chrome version, but running all tests on the older Chrome v114 works just fine.
      https://github.com/metabase/metabase/actions/runs/5677579526?pr=32688
      
      Proposal
Let's merge this ASAP as a hotfix to unblock everyone, and then we'll follow up when there's a clear resolution of this problem (most likely upstream).
  3. Jul 26, 2023
  4. Jul 25, 2023
  5. Jul 11, 2023
    • Overhaul E2E tests token activation logic (#32186) · a54b3dc6
      Nemanja Glumac authored
      
      * Start all tests without a token
      
      * Update tests
      
      * Fix tests in `admin` group
      
      * Fix tests in `dashboard-filters` group
      
* Fix tests in `visualizations` group
      
      * Batch fixes
      
      * Fix database banner snowplow test
      
      * Fix full-app mobile view banner test
      
      * Simplify and remove `isPremiumActive` check
      
      * Fix the helper
      
      * Fix helper to fail fast
      
      * Fix tests
      
      * Ensure token scope is set
      
      * Ensure CYPRESS_ env tokens are present
      
      * Set token only if running Metabase ee
      
      * Improve JSDoc
      
      * Slightly tweak log message
      
      * Update E2E dev guide
      
      * Update cross-version tests
      
      * Remove premium token from Cypress backend setup
      
      * Improve a comment
      
      * Fix test
      
      * Update embedding copy
      
      ---------
      
Co-authored-by: Alexander Polyankin <alexander.polyankin@metabase.com>
  6. Jul 10, 2023
  7. Jul 06, 2023
  8. Jul 04, 2023
  9. Jul 03, 2023
    • Fix cronjob frequency for ReplayIO E2E tests (#32091) · 9df654ba
      Nemanja Glumac authored
      The previous setting accidentally executed a scheduled job **every minute**
      on Sunday, when we actually wanted it to run only once on that day.
      
This PR fixes that: the job now runs once, at 10 PM every Sunday.
      https://crontab.guru/#0_22_*_*_0
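The fixed schedule can be sanity-checked with a small stdlib-only sketch (the `next_runs` helper is hypothetical, not part of the workflow): it interprets the three fields of `0 22 * * 0` that matter here and confirms the hits land on Sundays at 22:00, one week apart.

```python
from datetime import datetime, timedelta

def next_runs(spec, start, count=3):
    """Enumerate the next `count` times matching a cron spec like
    '0 22 * * 0' (minute, hour, any day, any month, Sunday).
    Only the minute/hour/weekday fields are interpreted here."""
    minute, hour, _, _, weekday = spec.split()
    t = start.replace(second=0, microsecond=0)
    hits = []
    while len(hits) < count:
        t += timedelta(minutes=1)
        # cron weekdays: 0 = Sunday; Python: Monday == 0, Sunday == 6
        if (t.minute == int(minute)
                and t.hour == int(hour)
                and (t.weekday() + 1) % 7 == int(weekday)):
            hits.append(t)
    return hits

runs = next_runs("0 22 * * 0", datetime(2023, 7, 1))
# Every hit falls on a Sunday at 22:00, one week after the previous one.
```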
      
      [ci skip]
    • Fix release workflow latest Docker tag adjustment (#32041) · 90b9ef00
      Nemanja Glumac authored
      * Fix Docker `latest` tag logic in the release workflow
      
As noted in #32036, the previous logic didn't account for the existence
of the N+1 RC tag when the current release is N. In other words, it
didn't promote `v1.46.6` to the latest Docker tag because it included
`v1.47.0-RC1` in the list of relevant tags it compared against.
      
      This PR fixes that and closes #32036.
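The fixed selection logic can be sketched like this (all names are hypothetical re-implementations, not the actual workflow script): prerelease `-RC` tags are filtered out before the highest version is chosen, so an RC of the next release can no longer shadow the current stable one.

```python
import re

# EE tags start with "v1"; prerelease tags carry an "-RC<n>" suffix
# and must be excluded before versions are compared.
TAG_RE = re.compile(r"^v1\.(\d+)\.(\d+)(?:\.(\d+))?$")  # rejects -RC tags

def latest_stable_ee(tags):
    """Return the highest non-prerelease EE tag, or None if there is none."""
    parsed = []
    for tag in tags:
        m = TAG_RE.match(tag)
        if m:  # prerelease tags like v1.47.0-RC1 fail the match
            feature, maintenance, build = (int(g or 0) for g in m.groups())
            parsed.append(((feature, maintenance, build), tag))
    return max(parsed)[1] if parsed else None

tags = ["v1.47.0-RC1", "v1.47.0-RC2", "v1.46.6", "v1.46.5", "v1.45.3.1"]
# latest_stable_ee(tags) -> "v1.46.6": the RC tags no longer win.
```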
      
      **Before:**
      ```
      Current version for v1.46.6 is {
        prefix: 'v1',
        feature: 46,
        maintenance: 6,
        build: NaN,
        isOSS: false,
        isEE: true,
        prerelease: false
      }
      
      Enumerating all git tag...
      Found total 414 tags
      
      Filtering for EE
      Found total 116 filtered tags
      
      Sorting tags to find the highest version numbers...
      Showing 20 tags with the highest versions...
           v1.47.0-RC1
           v1.47.0-RC2
      ---> v1.46.6
           v1.46.5
           v1.46.4
           v1.46.3
           v1.46.2
           v1.46.1
           v1.46.0
           v1.46.0-RC1
           v1.46.0-RC2
           v1.46.0-RC3
           v1.46.0-RC4
           v1.45.4
           v1.45.3.1
           v1.45.3
           v1.45.2.1
           v1.45.2
           v1.45.1
           v1.45.0
      
      The latest container image stays as v1.47.0-RC1
      There is no need to tag the v1.46.6 container image as latest.
      ```
      
      **After the fix:**
      ```
      Current version for v1.46.6 is {
        prefix: 'v1',
        feature: 46,
        maintenance: 6,
        build: NaN,
        isOSS: false,
        isEE: true,
        prerelease: false
      }
      
      Enumerating all git tags...
      Found total 430 tags
      
      Filtering for EE which excludes prerelease tags...
      Found total 94 filtered tags
      
      Sorting tags to find the highest version numbers...
      Showing 20 tags with the highest versions...
      ---> v1.46.6
           v1.46.5
           v1.46.4
           v1.46.3
           v1.46.2
           v1.46.1
           v1.46.0
           v1.45.4
           v1.45.3.1
           v1.45.3
           v1.45.2
           v1.45.1
           v1.45.0
           v1.44.7
           v1.44.6.1
           v1.44.6
           v1.44.5
           v1.44.4
           v1.44.3
           v1.44.2
      
      Thus, the container image for v1.46.6 must be marked as latest.
      ```
      
      * Remove unnecessary whitespace
      
      [ci skip]
  10. Jun 30, 2023
    • Run E2E tests using Replay browser weekly (#32021) · 7f972ab9
      Nemanja Glumac authored
      Replay is adding a significant overhead and is making every single run
      fail. This is messing with our statistics and analytics.
      
      While the Replay.io team is working on some optimizations that would
      remove the overhead, we can significantly dial down the frequency of
      these runs. We'll switch to running them only once per week, on Sunday.
      
      For now...
  11. Jun 29, 2023
    • Exclude ReplayIO scheduled E2E tests from re-run (#31983) · 5d5e790d
      Nemanja Glumac authored
      Running E2E tests using ReplayIO still adds a significant overhead, which
      leads to a lot of failed runs. There's no point in re-running those tests
      for at least two reasons:
      1. they will fail again and we're just wasting CI infrastructure
      2. our slack-failure-alert-bot will spam our Slack with the reports of
      the failed runs that we don't (currently) care about
      
      [ci skip]
  12. Jun 28, 2023
  13. Jun 27, 2023
  14. Jun 21, 2023
  15. Jun 20, 2023
  16. Jun 16, 2023
  17. Jun 15, 2023
    • Revamp and split an internal issue template (#31575) · a03a9e5f
      Lena authored
      Previously: one “feature or project” template with the `.Epic` label automatically attached.
      Now: two templates, one for epics, one for smaller things that don't require milestones (or `.Epic` label; they get `.Task` instead).
      
      Squashed commit messages follow:
      
      * Remove unused optional field
      
      * Update title to current convention
      
      * Fix typo
      
      * Remove unused section
      
      * Change capitalization for consistency
      
      * Remove redundant description
      
      * Rename the issue template to mention epic
      
      * Shorten the description of the issue template
      
      * Lighten up the milestones section
      
      Unfortunately, it doesn't seem possible to prepopulate the issue with an
      empty tasklist (https://github.com/orgs/community/discussions/50502).
      
      * Add an issue template for tasks [ci skip]
      
      We had one for epics, with milestones and `.Epic` label. Pretty often
      that label was accidentally left on smaller issues that were created
      using the same template. Now the smaller issues have their own template
      and their own label. Yay.
      
      https://metaboat.slack.com/archives/C04CGFHCJDD/p1686548875239219
      
      * Remove redundant angle brackets
      
      * Use a string to get a valid yaml file
      
Without the quotes, GitHub says
      `Error in user YAML: (<unknown>): did not find expected key while parsing
      a block mapping at line 1 column 1`
      
      * Remove rarely if ever used placeholders
      
      * Call out optional-ness of some links
      
      * Contextualize a header
      
      Context can be some text, a slack thread, a notion page, another issue,
      whatever.
  18. Jun 14, 2023
    • Refactor E2E tests in the `sharing` folder (#31455) · 4c01a8cf
      Nemanja Glumac authored
      * Extract `questionDetails`
      
      * Move `public-sharing` to the `sharing` folder
      
      * Remove redundant test
      
      This part was covered in `public-sharing.cy.spec.js`.
      
      * Create a separate `public-dashboard` E2E spec
      
      - Extract relevant part from `public.cy.spec.js`
      - Make all tests in `public-dashboard.cy.spec.js` run in isolation
      
      * Make tests in `public.cy.spec.js` run in isolation
      
      * Remove redundant wait
      
* Limit the query results to speed the test up
      
      * Merge public questions E2E tests
      
      * Merge `downloads` with the `sharing` E2E group
      
      * Remove `downloads` from the folder list
      
      * Refactor `public-dashboard.cy.spec.js`
      
      - Clean up linter warnings by using semantic selectors
- Speed the test up by using the API
  19. Jun 12, 2023
  20. Jun 08, 2023
  21. Jun 06, 2023
  22. Jun 04, 2023
  23. Jun 02, 2023
    • Pre-build the specific version and edition of MB (#31296) · 37a8b947
      Nemanja Glumac authored
Simply invoking `.bin/build.sh` produces an uberjar without a
version. This is why @ariya created #30915 in the first place.

But the build script already takes `:version` and `:edition` arguments,
so this PR passes them directly rather than manually unzipping the
uberjar to tweak its `version.properties` file.
  24. Jun 01, 2023
  25. May 31, 2023
  26. May 30, 2023
  27. May 29, 2023
  28. May 26, 2023
    • [E2E] Cypress Replay.io Integration (#29787) · 5e359e20
      Nemanja Glumac authored
      
      * Install `replay.io` library
      
      * Register `replay.io` in the Cypress config
      
      * Run E2E tests using Replay chromium browser but only in CI
      
      * Upload Replay.io recordings to the dashboard
      
      * Manually install Replay.io browser
      
      * Always upload recordings
      
      * Pass in a custom test run id
      
      * Disable asserts and send replays to a separate team
      
      * Upload OSS recordings as well
      
      * Use specific Ubuntu version
      
      * Record and run Replay.io on `master` only
      
      * Do not toggle CI browsers in the config
      
      * Test run: pass `replay-chromium` browser as CLI flag in a run
      
      * Fix multi-line command
      
      * Use replay plugin conditionally
      
      * Set the flag correctly
      
      * Require node process
      
      * Remove sourcemap vars
      
      * Record using replay.io on schedule
      
      * Explicitly name replay runs
      
      ---------
      
Co-authored-by: Jaril <jarilvalenciano@gmail.com>
  29. May 24, 2023
    • Indexed Entities (#30487) · 7f51f3c9
      Ryan Laurie authored
      * indexed entities schema
      
      * schema
      
      * endpoint with create and delete
      
      * hooked up tasks
      
      * first tests and hooking it all together
      
      * some e2e tests
      
      * search
      
      * search
      
Had to rename `value` -> `name`. The search machinery works well when
columns share the same name. Previously we renamed `:value` to `name` in
the search result, but search uses `searchable-columns-for-model` to
decide which values to score on and wasn't finding `:value`. It's easier
to just let `name` flow through.
      
      * include model_pk in results
      
      * send pk back as id, include pk_ref
      
      * rename fk constraint (had copied it)
      
      * populate indexed values with name not value
      
      * GET http://localhost:3000/api/model_index?model_id=135
      
      * populate indexed values on post
      
Done here for ease of demo. Follow-up work will schedule the task
immediately, but for now we do it on a thread and return the info.
      
      * test search results
      
      * fix t2 delete syntax on h2
      
      * fix h2
      
      * tests work on h2 and postgres app dbs
      
      * Fix after insert method
      
The after method lets you change the data, and I was returning the
results of the refresh job as the model-index. Not great.
      
      * unify some cross-db logic for inserting model index values
      
      * Extract shared logic in populating indexed values; mysql support
      
      * clean ns
      
      * I was the bug (endpoint returns the item now)
      
      * fix delete to check perms on model not model-index
      
      * fix tests
      
      * Fix tests
      
      * ignore unused private var
      
      * Add search tests
      
      * remove tap>
      
      * Tests for simple mbql, joined mbql, native
      
      * shutdown scheduler after finishing
      
      * Entity Search + Indexing Frontend (#30082)
      
      * add model index types and mocks
      
      * add integer type check
      
      * add new entities for model indexes
      
      * expose model index toggle in model metadata
      
      * add search description for indexed entities
      
      * add an error boundary to the app bar
      
      * e2e test entity indexes
      
      * update tests and types
      
      * first tests and hooking it all together
      
      * Renumber migrations
      
      i forgot to claim them and i am a bad person
      
      * Restore changes lost in the rebase
      
      * add sync task to prevent errors in temp
      
      without this you get errors when the temp db is created as it wants to
      set the sync and analyze jobs but those jobs haven't been started.
      
      * Restore and test GET model-index/:model_id
      
      * clean up api: get by model id or model-index-id
      
      * update e2e tests
      
      * ensure correct types on id-ref and value-ref
      
      * simplify lookup
      
      * More extensive testing
      
      * update types
      
      * reorder migrations
      
      * fix tests
      
      * empty to trigger CI
      
      * update types
      
      * Bump clj-kondo
      
      old version reports
      
      /work/src/metabase/models/model_index.clj:27:6: error: #'metabase.models.interface/add-created-at-timestamp is private
      
      on source:
      
      ```clojure
      (t2/define-before-insert ::created-at-timestamp
        [instance]
        #_{:clj-kondo/ignore [:private-call]} ;; <- ignores this override
        (#'mi/add-created-at-timestamp instance))
      ```
      
      * Move task assertions to model namespace
      
      the task system is initialized in the repl so these all work
      locally. But in CI that's not necessarily the case. And rather than
      mocking the task system again I just leave it in the hands of the other
      namespace.
      
      * Don't serialize ModelIndex and ModelIndexValue
      
      seems like it is a setting on an individual instance. No need to have
      that go across boundaries
      
      * Just test this on h2.
      
      chasing names across the different dbs is maddening. and the underlying
      db doesn't really matter
      
      * indexes on model_index tables
      
      - `model_index.model_id` for searching by model_id
      - `model_index_value.model_index_id` getting values for a particular
        index
      
      * copy/paste error in indexing error message
      
      * `mt/with-temp` -> `t2.with-temp/with-temp`
      
      * use `:hook/created-at-timestamped?`
      
      * move field-ref normalization into models.interface
      
      * nit: alignment
      
      * Ensure write access to underlying model for create/delete
      
      * Assert more about pk/value refs and better error messages
      
      * Don't search indexed entities when user is sandboxed
      
      Adds a `= 1 0` clause to indexed-entity search if the user is
      sandboxed.
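The always-false-clause trick can be sketched in Python (the function and column names here are hypothetical; the real code uses honeysql's `[:= 1 0]`):

```python
def indexed_entity_search_filters(user_is_sandboxed, term):
    """Build SQL WHERE fragments for indexed-entity search (hypothetical
    names). Sandboxed users get an always-false clause appended."""
    clauses = ["name LIKE ?"]
    if user_is_sandboxed:
        # "1 = 0" guarantees sandboxed users see no indexed-entity rows
        # without special-casing the rest of the query.
        clauses.append("1 = 0")
    return " AND ".join(clauses), [f"%{term}%"]

sql, params = indexed_entity_search_filters(True, "gadget")
# sql -> "name LIKE ? AND 1 = 0"
```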
      
      Also, got tired of the pattern:
      
      ```clojure
      (if-let [segmented-user? (resolve 'metabase-enterprise.sandbox.api.util/segmented-user?)]
        (if (segmented-user?)
          :sandboxed-user
          :not-sandboxed-user)
        :not-sandboxed-user)
      ```
      
      and now we have a tasty lil ole'
      
      ```clojure
      (defenterprise segmented-user?
        metabase-enterprise.sandbox.api.util
        []
        false)
      ```
      
      that you can just
      
      ```clojure
      (ns foo
        (:require
         [metabase.public-settings.premium-features :as premium-features]))
      
      (if (premium-features/segmented-user?)
  :sandboxed
        :not-sandboxed)
      ```
      
      * redef premium-features/segmented-user?
      
      * trigger CI
      
      * Alternative GEOJSON.io fix (#30675)
      
      * Alternative GEOJSON.io fix
      
      rather than not testing, just accept that we might get an expected
      location back or we might get nil. This actually represents how the
      application will behave and will seamlessly work as the service improves
      or continues throwing errors:
      
      ```shell
      ❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23"
      <html>
      <head><title>500 Internal Server Error</title></head>
      <body>
      <center><h1>500 Internal Server Error</h1></center>
      <hr><center>openresty</center>
      </body>
      </html>
      
      ❯ curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73"
      [{"area_code":"0","organization_name":"GOOGLE","country_code":"US",
        "country_code3":"USA","continent_code":"NA","ip":"8.8.8.8",
        "region":"California","latitude":"34.0544","longitude":"-118.2441",
        "accuracy":5,"timezone":"America\/Los_Angeles","city":"Los Angeles",
        "organization":"AS15169 GOOGLE","asn":15169,"country":"United States"},
       {"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US",
       "country_code3":"USA","continent_code":"NA","ip":"136.49.173.73",
       "region":"Texas","latitude":"30.2423","longitude":"-97.7672",
       "accuracy":5,"timezone":"America\/Chicago","city":"Austin",
       "organization":"AS16591 GOOGLE-FIBER","asn":16591,"country":"United States"}]
      ```
      
      Changes are basically
      
      ```clojure
(schema= (s/conditional some? <original-expectation-schema>
                        nil?  (s/eq nil))
         response-from-geojson)
      ```
      
      * Filter to login error messages
      
      ```clojure
      (into [] (map (fn [[log-level error message]] [log-level (type error) message]))
            error-messages-captured-in-test)
      [[:error
        clojure.lang.ExceptionInfo
        "Error geocoding IP addresses {:url https://get.geojs.io/v1/ip/geo.json?ip=127.0.0.1}"]
       [:error clojure.lang.ExceptionInfo "Authentication endpoint error"]]
       ```
      
      * check timestamps with regex
      
      seems like they fixed the service
      
      ```shell
curl "https://get.geojs.io/v1/ip/geo.json?ip=8.8.8.8,136.49.173.73,185.233.100.23"
      [{"country":"United States","latitude":"34.0544","longitude":"-118.2441","accuracy":5,"timezone":"America\/Los_Angeles","ip":"8.8.8.8","organization":"AS15169 GOOGLE","country_code3":"USA","asn":15169,"area_code":"0","organization_name":"GOOGLE","country_code":"US","city":"Los Angeles","continent_code":"NA","region":"California"},{"country":"United States","latitude":"30.2423","longitude":"-97.7672","accuracy":5,"timezone":"America\/Chicago","ip":"136.49.173.73","organization":"AS16591 GOOGLE-FIBER","country_code3":"USA","asn":16591,"area_code":"0","organization_name":"GOOGLE-FIBER","country_code":"US","city":"Austin","continent_code":"NA","region":"Texas"},{"country":"France","latitude":"48.8582","longitude":"2.3387","accuracy":500,"timezone":"Europe\/Paris","ip":"185.233.100.23","organization":"AS198985 AQUILENET","country_code3":"FRA","asn":198985,"area_code":"0","organization_name":"AQUILENET","country_code":"FR","continent_code":"EU"}]
      ```
      
      whereas a few hours ago that returned a 500. And now the timestamps are different
      
      ```
      ;; from schema
      "2021-03-18T19:52:41.808482Z"
      ;; results
      "2021-03-18T20:52:41.808482+01:00"
      ```
      
      kinda sick of dealing with this and don't want to deal with a schema
      that matches "near" to a timestamp.
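For context on why the string-level schema match broke: the two timestamps quoted above denote the same instant, just rendered with different UTC offsets. A quick stdlib check (using `+00:00` in place of the `Z` suffix for portability across Python versions):

```python
from datetime import datetime

# The schema compared timestamps textually, so the same instant rendered
# with a different UTC offset no longer matched. Parsed as datetimes,
# the two values from above are equal:
a = datetime.fromisoformat("2021-03-18T19:52:41.808482+00:00")  # the "Z" form
b = datetime.fromisoformat("2021-03-18T20:52:41.808482+01:00")
assert a == b  # same instant, different rendering
```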
      
      * Restore these
      
I copied changes over from Tim's fix and it included these fixes. But
now the service is fixed and these are the original values from Cam's
original fixes.
      
      * Separate FK reference because of index
      
      ERROR in metabase.db.schema-migrations-test/rollback-test (ChangeLogIterator.java:126)
      Uncaught exception, not in assertion.
      
         liquibase.exception.LiquibaseException: liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
         liquibase.exception.RollbackFailedException: liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
         liquibase.exception.DatabaseException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint [Failed SQL: (1553) DROP INDEX `idx_model_index_model_id` ON `schema-migrations-test-db-32952`.`model_index`]
         java.sql.SQLTransientConnectionException: (conn=637) Cannot drop index 'idx_model_index_model_id': needed in a foreign key constraint
           SQLState: "HY000"
          errorCode: 1553
      
      * delete cascade on model_index fk to report_card
      
      * reorder fk and index migrations
      
      * break apart FK from table definition for rollbacks
      
      * update types and appease the new lint rules
      
      * define models in a toucan2 way
      
      * remove checksum: ANY from migrations
      
      * api permissions 403 tests
      
      * test how we fetch values
      
      * remove empty line, add quotes to migration file
      
      * cleanup from tamas's comments
      
      - indention fix
      - inline an inc
      - add values in a tx (also catch errors and update model_index)
      - clarify docstring on `should-deindex?`
      - don't use declare, just define fn earlier
      
      * update after merge
      
      * update to new model name
      
      * only test joins for drivers that support it
      
      * don't test mongo, include scenario in test output
      
      * sets use `disj` not `dissoc`
      
      * handle oracle primary id type
      
      `[94M "Awesome Bronze Plate"]`
      (type 94M) -> java.math.BigDecimal
      
      * remove magic value of 5000 for threshold
      
      * Reorder unique constraint and remove extra index
      
Previously we had a unique constraint on (model_pk, model_index_id) and
an index on model_index_id. If we reorder the unique constraint to
(model_index_id, model_pk), we get the index on model_index_id for free.
So we'll take it.
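The "index for free" point can be demonstrated with SQLite (used here purely as an illustration; the app DBs differ): the leading column of the composite unique index already serves lookups on `model_index_id` alone, so a separate single-column index is redundant.

```python
import sqlite3

# A unique constraint on (model_index_id, model_pk) auto-creates a
# composite index whose leading column covers model_index_id lookups.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE model_index_value (
        model_pk INTEGER,
        model_index_id INTEGER,
        UNIQUE (model_index_id, model_pk)
    )
""")
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM model_index_value WHERE model_index_id = 1"
).fetchall()
# The plan reports a search using the auto-created unique index rather
# than a full table scan.
```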
      
      * remove breakout.
      
      without breakout
      
      ```sql
       SELECT "source"."id"    AS "ID",
             "source"."title" AS "TITLE"
      FROM   (SELECT "PUBLIC"."products"."id"    AS "ID",
                     "PUBLIC"."products"."title" AS "TITLE"
              FROM   "PUBLIC"."products") AS "source"
      ORDER  BY "source"."title" DESC
      ```
      
      with breakout
      
      ```sql
       SELECT "source"."id"    AS "ID",
             "source"."title" AS "TITLE"
      FROM   (SELECT "PUBLIC"."products"."id"    AS "ID",
                     "PUBLIC"."products"."title" AS "TITLE"
              FROM   "PUBLIC"."products") AS "source"
      GROUP  BY "source"."id",
                "source"."title"
      ORDER  BY "source"."title" DESC,
                "source"."id" ASC
      ```
      
      * restore breakout
      
      caused some tests with joins to fail
      
      * update model-index api mock
      
      * Simpler method for indexing
      
      Remove the `generation` part. We'll do the set operations in memory
      rather than relying on db upserts. Now all app-dbs work the same and we
      get all indexed values, diff against current values from the model, and
      retract and add the appropriate values. "Updates" are counted as a
      retraction and then addition rather than separately trying to update a
      row.
      
      Some garbage generation stats:
      
      Getting garbage samples with
      
      ```clojure
      (defn tuple-stats
        []
        (let [x (-> (t2/query ["select n_live_tup, n_dead_tup, relname from pg_stat_all_tables where relname = 'model_index_value';"])
                    (first))]
          {:live (:n_live_tup x)
           :dead (:n_dead_tup x)}))
      ```
      
      And using a simple loop to index the values repeatedly:
      
      ```clojure
      (reduce (fn [stats _]
                (model-index/add-values! model-index)
                (conj stats (tuple-stats)))
              []
              (range 20))
      ```
      
      With generation style:
      
      ```clojure
      [{:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 400}
       {:live 200, :dead 400}
       {:live 200, :dead 400}
       {:live 200, :dead 400}
       {:live 200, :dead 770}
       {:live 200, :dead 770}
       {:live 200, :dead 787}
       {:live 200, :dead 787}
       {:live 200, :dead 825}
       {:live 200, :dead 825}
       {:live 200, :dead 825}
       {:live 200, :dead 825}
       {:live 200, :dead 818}
       {:live 200, :dead 818}
       {:live 200, :dead 818}
       {:live 200, :dead 818}
       {:live 200, :dead 854}]
      ```
      
      With "dump and reload":
      
      ```clojure
      [{:live 200, :dead 0}
       {:live 200, :dead 200}
       {:live 200, :dead 200}
       {:live 200, :dead 600}
       {:live 200, :dead 600}
       {:live 200, :dead 600}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 800}
       {:live 200, :dead 1600}
       {:live 200, :dead 1600}
       {:live 200, :dead 1600}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 2200}
       {:live 200, :dead 3600}]
      ```
      
      And of course now that it's a no-op,
      
      ```clojure
      [{:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}
       {:live 200, :dead 0}]
      ```
      
No db actions are taken when reindexing unchanged values. We've traded
in-memory set operations for less garbage in the database, and since
we're capped at 5000 rows that seems fine for now.
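The dump-and-diff approach described above can be sketched as follows (function and argument names are hypothetical):

```python
def plan_reindex(indexed, current):
    """Diff-based reindex sketch: compare the stored (pk, value) pairs
    against the model's current values and compute which rows to retract
    and which to add. An update is a retraction plus an addition."""
    indexed, current = set(indexed), set(current)
    return {"retract": indexed - current, "add": current - indexed}

plan = plan_reindex(
    indexed={(1, "Awesome Bronze Plate"), (2, "Old Name")},
    current={(1, "Awesome Bronze Plate"), (2, "New Name"), (3, "Brand New")},
)
# A reindex over unchanged values yields empty retract/add sets, which is
# why the dead-tuple count stays at zero in the stats above.
```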
      
      * add analogs to `isInteger` to constants.cljc and isa.cljc files
      
      * clarity and do less work on set conversion
      
      * exclude clojure.core/integer? from isa.cljc
      
      * Rename `state_change_at` -> `indexed_at`
      
      * breakout brings along an order-by and fields clause
      
      ---------
      
Co-authored-by: dan sutton <dan@dpsutton.com>
    • Fix typo in the pre-release script (#30974) · 183bf4f5
      Nemanja Glumac authored
      [ci skip]
  30. May 23, 2023