This project is mirrored from https://github.com/metabase/metabase.
  1. Apr 06, 2022
      Add currency symbols to inputs (#21398) · 34378b8d
      Ryan Laurie authored
      * add correct currency symbol to filter inputs
      
      * show currency symbol in between inputs
      
      * lift currency prefix logic up component tree
      
      * remove unused function
      
      * fix existing tests
      
      * test currency prefixes
      
      * fix test nesting
      
      * get currency symbols for columns from our currency map
      
      * use visualization settings for currency inputs
      
      * use keyForColumn() to get visualization settings
  7. Feb 14, 2022
      Add logic to truncate and uniquely-suffix column alias identifiers (#19659) · f94d5149
      Cam Saul authored
      * Add failing test for #15978
      
      * Improved test
      
      * Add new metabase.driver.query-processor.escape-join-aliases QP middleware
      
      * Test fix :wrench:
      
      * Add reference to #20307
      
      * Add some extra dox
      
      * Test fixes for BigQuery drivers
      
      * revert unneeded change
      
      * Fix :bigquery and :bigquery-cloud-sdk mixup
      
      * Test fixes :wrench:
      
      * Test fix :wrench:
      
      * Remove comment I meant to remove
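The commit title describes the approach: column alias identifiers that exceed a driver's length limit get truncated and given a unique suffix so that two distinct long aliases cannot collide after truncation. A minimal sketch of that idea (the name `escape_alias`, the 30-character limit, and the hash-suffix scheme are illustrative assumptions, not Metabase's actual middleware logic):

```python
import hashlib

def escape_alias(name: str, max_length: int = 30) -> str:
    """Truncate an alias that exceeds max_length, appending a short hash
    suffix derived from the full name so distinct long names stay distinct."""
    if len(name) <= max_length:
        return name
    # 8 hex chars of the digest are enough to disambiguate truncated names
    suffix = hashlib.md5(name.encode("utf-8")).hexdigest()[:8]
    return name[: max_length - len(suffix) - 1] + "_" + suffix
```

Two long aliases sharing the same first 30 characters now truncate to different identifiers, which is exactly the collision the middleware exists to prevent.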
  8. Feb 10, 2022
      Linting improvements (#20424) · 8f119d64
      Michiel Borkent authored
      * honeysql var improvements
      
      * Define routes
      
      * Define routes
      
      * define-routes improvement
      
      * honeysql helpers
      
* honeysql
      
      * abs in clojure 1.11
      
      * fix logic unresolved vars
      
      * deprecated
      
      * deprecated var warnings
      
      * undo replace
  9. Feb 08, 2022
      Rework how the remappings middleware matches remapped to/from columns for... · e35ecaca
      Cam Saul authored
      Rework how the remappings middleware matches remapped to/from columns for explicit external (FK) remaps (#20009)
      
      * Fix #9236 [WIP]
      
      * Revert unneeded changes
      
      * Only merge namespaced :options into result info
      
      * Revert unneeded changes
      
      * Everything is working :scream_cat:
      
      * Revert unneeded change
      
      * Clean namespaces
      
      * Add `:test` profile to `namespace-checker` to suppress log messages.
      
      * PR feedback
      
      * Fix namespaces
  11. Feb 01, 2022
      Unused bindings part1 (#19995) · dd300077
      dpsutton authored
      * Unused bindings cleanup
      
      `clj-kondo --lint src:shared/src > lint` and then just work my way
      through it. Note that there are warnings about deprecations that we will
      need to get rid of as well. Most likely by turning off that warning
      until we are ready to tackle the project.
      
      * More unused and a sort ns
      
      * Removing unused svg helpers that eastwood is complaining about
      
      * Restore accidentally deleted `begin!` impl for xlsx
      
      * clean ns
      
      * clean ns
      
      * Last few unused bindings
      
      * kondo checks for empty docstrings
      
      * Little more cleanup
      MBQL :expressions should use strings for keys (#19960) · 5d5b33a1
      Cam Saul authored
      * MBQL :expressions should use strings for keys
      
      * Make some more tests ^:parallel
      
      * Update dox
      
      * Test fix
      
      * Sort namespaces
  18. Dec 15, 2021
      Datasets preserve metadata (#19158) · 5f8bc305
      dpsutton authored
      * Preserve metadata and surface metadata of datasets
      
      Need to handle two cases:
      1. querying the dataset itself
      2. a question that is a nested question of the dataset
      
      1. Querying the dataset itself
      This is a bit of a shift of how metadata works. Previously it was just
      thrown away and saved on each query run. This kind of needs to be this
      way because when you edit a question, we do not ensure that the metadata
      stays in sync! There's some checksum operation that ensures that the
      metadata hasn't been tampered with, but it doesn't ensure that it
      actually matches the query any longer.
      
      So imagine you add a new column to a query. The metadata is not changed,
      but its checksum matches the original query's metadata and the backend
      happily saves this. Then on a subsequent run of the query (or if you hit
      visualize before saving) the metadata is tossed and updated.
      
      So to handle this carelessness, we have to allow the metadata that can
      be edited to persist across just running the dataset query. So when
      hitting the card api, stick the original metadata in the middleware (and
      update the normalize ns not to mangle field_ref -> field-ref among
      others). Once we have this smuggled in, when computing the metadata in
      annotate, we need a way to index the columns. The old and bad way was
      the following:
      
      ```clojure
      ;; old
      (let [field-id->metadata (u/key-by :id source-metadata)] ...)
      ;; new and better
      (let [ref->metadata (u/key-by (comp u/field-ref->key :field_ref) source-metadata)] )
      ```
      
This change is important because ids are only for fields that map to
actual database columns. Computed columns, case expressions,
manipulations, and all native fields will lack this. But we can make
field references.
      
      Then for each field in the newly computed metadata, allow the non-type
      information to persist. We do not want to override type information as
      this can break a query, but things like description, display name,
      semantic type can survive.
      
      This metadata is then saved in the db as always so we can continue with
      the bit of careless metadata saving that we do.
      
      2. a question that is a nested question of the dataset
      This was a simpler change to grab the source-metadata and ensure that it
      is blended into the result metadata in the same way.
      
Things I haven't looked at yet: column renaming, and whether we need to
allow conversions to carry through or if those necessarily must be
opaque (i.e., once it has been cast, forget that it was originally a
different type so we don't try to cast the already-cast value); I'm
sure there are other things too. But it has been quite a pain to figure
all of this stuff out, especially the divide between native and mbql,
since native requires the first row of values back before it can detect
some types.
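The indexing-by-field-ref strategy described above can be sketched outside Clojure as well. Here is a hypothetical Python rendering of "key the saved metadata by field_ref, then let only the user-editable, non-type keys survive onto the freshly computed columns" — the key names mirror the commit text, the `field_ref_key` helper is an illustrative stand-in for `u/field-ref->key`, and the merge rules are a simplification:

```python
# Keys a user may edit on a dataset; type information is deliberately
# excluded, since overriding types could break the query.
EDITABLE_KEYS = {"description", "display_name", "semantic_type"}

def field_ref_key(ref):
    # Stand-in for u/field-ref->key: turn a ref like
    # ["field", "foo", {"base-type": "type/*"}] into a hashable key.
    return str(ref)

def merge_dataset_metadata(computed_cols, saved_metadata):
    """Index saved dataset metadata by field_ref rather than :id —
    computed columns, expressions, and native fields have no field ID —
    then overlay only the editable, non-type keys onto each column."""
    ref_to_saved = {field_ref_key(c["field_ref"]): c for c in saved_metadata}
    merged = []
    for col in computed_cols:
        saved = ref_to_saved.get(field_ref_key(col["field_ref"]), {})
        overlay = {k: v for k, v in saved.items() if k in EDITABLE_KEYS}
        merged.append({**col, **overlay})
    return merged
```

A user-edited display name or description survives the re-run, while the freshly computed base type always wins.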
      
      * Add in base-type specially
      
      Best to use field_refs to combine metadata from datasets. This means
      that we add this ref before the base-type is known. So we have to update
      this base-type later once they are known from sampling the results
      
      * Allow column information through
      
I'm not sure how this base-type is set for
annotate-native-cols. Presumably we don't have it and get it from the
results, but this is not true. I guess we do some analysis on count
types. I'm not sure why they failed, though.
      
      * Correctly infer this stuff
      
      This was annoying. I like :field_ref over :name for indexing, as it has
      a guaranteed unique name. But datasets will have unique names due to a
      restriction*. The problem was that annotating the native results before
      we had type information gave us refs like `[:field "foo" {:base-type
      :type/*}]`, but then this ruined the merge strategy at the end and
      prevented a proper ref being merged on top. Quite annoying. This stuff
      is very whack-a-mole in that you fix one bit and another breaks
      somewhere else**.
      
      * cannot have identical names for a subselect:
          select id from (select 1 as id, 2 as id)
      
      ** in fact, another test broke on this commit
      
      * Revert "Correctly infer this stuff"
      
      This reverts commit 1ffe44e90076b024efd231f84ea8062a281e69ab.
      
      * Annotate but de-annotate in a way
      
      To combine metadata from the db, really, really want to make sure they
      actually match up. Cannot use name as this could collide when there are
      two IDs in the same query. Combining metadata on that gets nasty real
      quick.
      
For mbql and native, it's best to use field_refs. Field_refs offer the
best of both worlds: if there's an id, we are golden and match by id;
if by name, they have been uniquified already. So this will run into
issues if you reorder a query or add a new column with the same name,
but I think that's the theoretical best we can do.
      
BUT, we have to do a little cleanup for this stuff. When native adds the
field_ref, it needs to include some type information, but this isn't
known until after the query runs for native, since it's just an opaque
query until we run it. So annotating will add a `[:field name
{:base_type :type/*}]`, and then our merging doesn't clobber that
later. So it's best to add the field_refs, match up with any db
metadata, and then remove the field_refs.
      
      * Test that metadata flows through
      
      * Test mbql datasets and questions based on datasets
      
      * Test mbql/native queries and nested queries
      
      * Recognize that native query bubbles into nested
      
      When using a nested query based on a native query, the metadata from the
      underlying dataset is used. Previously we would clobber this with the
      metadata from the expected cols of the wrapping mbql query. This would
      process the display name with `humanization/name->human-readable-name`
      whereas for native it goes through `u/qualified-name`.
      
      I originally piped the native's name through the humanization but that
      leads to lots of test failures, and perhaps correct failures. For
      instance, a csv test asserts the column title is "COUNT(*)" but the
      change would emit "Count(*)", a humanization of count(*) isn't
      necessarily an improvement nor even correct.
      
      It is possible that we could change this in the future but I'd want it
      to be a deliberate change. It should be mechanical, just adjusting
      `annotate-native-cols` in annotate.clj to return a humanized display
      name and then fixing tests.
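The display-name divergence is easy to demonstrate with a toy humanizer — a stand-in for `humanization/name->human-readable-name`, not its real logic: title-casing works for MBQL column names but buys nothing for a native alias like count(*).

```python
def humanize(name: str) -> str:
    # Toy stand-in for name->human-readable-name: underscores to spaces,
    # then capitalize each word. Not Metabase's actual algorithm.
    return " ".join(w.capitalize() for w in name.replace("_", " ").split())

humanize("created_at")  # -> "Created At": an improvement for MBQL names
humanize("count(*)")    # -> "Count(*)": no improvement for a native alias
```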
      
      * Allow computed display name on top of source metadata name
      
      If we have a join, we want the "pretty" name to land on top of the
      underlying table's name. "alias → B Column" vs "B Column".
      
      * Put dataset metadata in info, not middleware
      
      * Move metadata back under dataset key in info
      
      We want to ensure that dataset information is propagated, but card
      information should be computed fresh each time. Including the card
      information each time leads to errors as it erroneously thinks the
      existing card info should shadow the dataset information. This is
      actually a tricky case: figuring out when to care about information at
      arbitrary points in the query processor.
      
      * Update metadata to :info not :middleware in tests
      
      * Make var private and comment about info metadata
  19. Dec 10, 2021
      Big QP parameter refactor; validate param :types for Cards (#19188) · 0c4be936
      Cam Saul authored
      * Refactor: move Card and Dashboard QP code into their own qp.* namespaces
      
      * Disable extra validation for now so a million tests don't fail
      
      * WIP
      
      * Validate template tag :parameters in query in context of a Card
      
      * Fixes
      
      * Disable strict validation for now
      
      * Test fixes [WIP]
      
      * Make the parameter type schema a little more forgiving for now
      
      * Tests & test fixes :wrench:
      
      * More test fixes :wrench:
      
      * 1. Need more tests
      2. Need to actually validate stuff
      
      * More test fixes. :wrench:
      
      * Test fixes (again)
      
      * Test fix :wrench:
      
      * Some test fixes / PR feedback
      
      * Disallow native queries with a tag widget-type of "none"
      
      Template tags with a widget-type that is undefined, null, or "none" now
      cause the query's isRunnable method to return false. Existing questions
      that have this defect won't be runnable until they are resaved with a
      set widget-type.
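The widget-type rule above can be sketched as a simple predicate — the name `has_runnable_template_tags` and the dict shapes are hypothetical; the real check lives in the frontend's isRunnable logic:

```python
def has_runnable_template_tags(template_tags: dict) -> bool:
    """A native query is runnable only if every template tag declares a
    concrete widget type: missing, None, and "none" all disqualify it."""
    for tag in template_tags.values():
        if tag.get("widget-type") in (None, "none"):
            return False
    return True
```

Questions saved before this change may carry tags with no widget type; per the commit, they stay non-runnable until resaved with one set.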
      
      * Fix prettier error
      
      * add snippet and card types to validation pass
      
      * Make sure template tag map keys + `:names` agree + test fixes
      
      * Have MBQL normalization reconcile template tags map key and :name
      
      * Test fix :wrench:
      
      * Fix tests for Cljs
      
      * Fix Mongo tests.
      
      * Allow passing :category parameters for :text/:number/:date for now.
      
      * Dashboard subscriptions should use qp.dashboard code for executing
      
      * Make sure Dashboard QP parameter resolution code merges in default values
      
      * Add a test for sending a test Dashboard subscription with default params
      
      * Prettier
      
      * If both Dashboard and Card have default param value, prefer Card's default
      
      * Test fix :wrench:
      
      * More tests and more fixes :cry:
      
      
      
Co-authored-by: Dalton Johnson <daltojohnso@users.noreply.github.com>
  21. Nov 23, 2021
      New dashboard query endpoints and consolidate Dashboard API/Public/Embed... · 0e820655
      Cam Saul authored
      New dashboard query endpoints and consolidate Dashboard API/Public/Embed parameter resolution code (#18994)
      
      * Code cleanup
      
      * Make the linters happy
      
      * Add pivot version of the new endpoints
      
      * implement usage of the new dashboard card query endpoint (#19012)
      
      * add new endpoints to services.js
      
      * replace CardApi.query with the new endpoint + pass dashboardId
      
      * add parameter id to parameter object found on query body
      
      * run dashchards using card query endpoint when they're new/unsaved on dashboard
      
      * Update endpoint references in e2e tests
      
      * Remove count check from e2e test
      
      We can make sure a double-mapped parameter filter shows results from
      both fields, but unfortunately with the new endpoint, the results don't
      seem to work anymore. I think that's OK? Maybe?
      
      * skip corrupted filters test
      
      the query endpoint now results in a 500 error caused by a schema
      mismatch
      
      * fix a few e2e intercepted request mismatches
      
      * unskip filter corruption test and remove the part that no longer works
      
Co-authored-by: Dalton <daltojohnso@users.noreply.github.com>
  23. Oct 11, 2021
      Two column table (#18392) · 57506cc0
      dpsutton authored
* Don't render bar charts as sparklines
      
      https://github.com/metabase/metabase/issues/18352
      
      A simple select `SELECT 'a', 1 UNION ALL SELECT 'b', 2` would trigger as
      a sparkline and blow up when comparing. There are two issues:
      - don't render tables as a sparkline even if it only has two columns
      - don't compare things that aren't comparable.
      
      this solves the first issue, ensuring that tables aren't matched as
      sparklines
      
      * Sparkline should only compare comparable values
      
      Check that the columns are temporal or numeric before comparing them. It
      is only to optionally reverse the order of the results which seems
      questionable in itself, but we can at least check that they are
      comparable before attempting to
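The fix described here amounts to a guard before sorting: only reverse the row order when the x-axis values are actually orderable. A rough sketch, with illustrative names and row shapes (not the actual frontend code):

```python
from datetime import date
from numbers import Number

def comparable_for_sparkline(values) -> bool:
    """Only temporal or numeric x-values may be compared when deciding
    whether to reverse the result rows into ascending order."""
    return all(isinstance(v, (Number, date)) for v in values)

def maybe_reverse_rows(rows):
    xs = [r[0] for r in rows]
    if comparable_for_sparkline(xs) and len(xs) >= 2 and xs[0] > xs[-1]:
        return list(reversed(rows))
    return rows  # string x-values like 'a', 'b' are left untouched
```

`SELECT 'a', 1 UNION ALL SELECT 'b', 2` now passes through unchanged instead of blowing up on an incomparable string comparison.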
      
      * Ensure tests render as sparklines
      
The default was a :table, and that adds csv attachments which throw off
our tests
      
      * unskip repro
      
      * remove unnecessary extra line
  33. Jul 22, 2021
      Fix serialization: Visualization column settings lost (#17127) · eb59ab27
      Jeff Evans authored
      Update visualization_settings.cljc to handle the table.columns submap
      
      Update dump and load to handle :visualization_settings on a Card (basically identically to how they are handled for a DashboardCard)
      
Updating test to have a card include :visualization_settings and ensure the fields are properly handled
      
      Updating visualization_settings test case to incorporate table.columns as well as :show_mini_bar (under ::column_settings)
  35. May 27, 2021
      Serialization fixes for x.39 (#15858) · 598a1124
      Jeff Evans authored
      Serialization refactoring and fixes
      
      *********************************************************
      * Support loading linked cards in different collections *
      *********************************************************
      
      Architectural changes:
      Lay foundation for more generic retry mechanism
      Establish a keyword to represent a failed name->id lookup that can happen during load, and map that to information about which part of the card resolution failed (for logging purposes)
      Similar to dashboards, add a separate fn for performing a load of cards
      Invoking the load-cards fn from the load multimethod implementation when some cards failed name resolution
      
      Test enhancements:
      Add new multimethod to perform more specific assertions on the loaded entities
      For cards - check that the native SQL of the loaded card matches that of the original (pre-dump) instance
      For dashboards - check that the linked card for each dashboard card series matches
      
      Bug fixes:
      Apply the mbql-id->fully-qualified-name fn on serialize to filter, breakout, and aggregation clauses
      Updating "My Test" card to include filter clause so that fix is exercised
      Instead of upserting users with a "changeme" password, simply do not attempt to change the password
      Adding :engine to the identity condition of Database (so unrelated test-data instances don't get trashed on load)
      
      Cleanup:
      Remove unused specs namespace
      
      ****************************************
      * Support dashboard card click actions *
      ****************************************
      
      Architectural changes:
      Add visualization settings shared model namespace with utility functions
      
      Adding tests for viz settings
      
      Changes to serialization:
      Update visualization_settings in dump to convert column_settings keys from field ID to fully qualified name, and update the click action target with the fully qualified name
      Also converting targetId in top level click action to fully qualified name
      
      Doing the reverse in load, using the same utility code
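The dump-side transformation described above — rewriting column_settings keys from field IDs to fully qualified names so they survive across instances — can be sketched like this. The JSON-encoded key shape and the path-style fully qualified name are simplified stand-ins for what visualization_settings.cljc actually handles:

```python
import json

def field_ids_to_fqnames(column_settings: dict, id_to_fqname: dict) -> dict:
    """Rewrite column_settings keys like '["ref", ["field", 42, null]]' so
    the numeric field ID becomes a portable fully qualified name on dump.
    Load performs the inverse with a name->id mapping."""
    out = {}
    for key, settings in column_settings.items():
        ref = json.loads(key)
        if ref[0] == "ref" and ref[1][0] == "field" and isinstance(ref[1][1], int):
            ref[1][1] = id_to_fqname[ref[1][1]]
        out[json.dumps(ref)] = settings
    return out
```

Because the settings map is keyed by a serialized field reference, leaving raw IDs in place would silently orphan the settings when the target instance assigns different IDs.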
      
      Changes to serialization tests:
      Updating the existing dashcard in the "world" roundtrip test to include the click action
      
      Adding assertion for the click action in the assert-loaded-entity implementation for Dashboard
      
      ******************************************************************
      * Stop nesting cards and check for unresolved query dependencies *
      ******************************************************************
      
      In names.clj, stop qualifying a card's name by its dataset query's source card ID (this is what caused the "nested" directory structure)
      
      In load.clj, start checking that either the source query [:dataset_query :query] is resolved properly as a card ID or table ID, and if neither, then propagate an ::unresolved-names entry up into the card (causing it to be reloaded on second pass)
      
      Fix logging of unresolved name info
      
      Test changes:
      Add two new cards to the "world", one that lives in the root collection but depends on a card within a collection for its query, and one opposite (i.e. lives in a collection but depends on a root card for its query)
      
      Adding new check that collection names match after loading
      
      ************************
      * Logging improvements *
      ************************
      
      Include causes when logging errors to aid with diagnosis
      
      Add BEGIN and END messages for both dump and load commands
      
      Add dynamic var to suppress printing of name lookup error on first pass load
      
      *************************************************
      * Handle collection namespaces and retry pulses *
      *************************************************
      
      Architectural changes:
      Considering a collection namespace to be part of its fully qualified name (putting in a subdirectory called :namespace when writing to disk)
      Accounting for that change when dumping and loading, and upsert (i.e. namespace is part of unique columns)
      Bumping serialization format version to 2 because of this
      
      Changes to load:
      Add load-pulses fn that works similarly to others, to facilitate reloading
      Add similar logic as other entities to reload a pulse if the :card_id is not found
      
      Model changes:
      Add hack fn to compare namespace values for the "namespace does not change" assertion with appropriate comment
      
      Test changes:
      Added test for pulse with reload
      
      ***********************************************
      * Handle dimension entry within template-tags *
      ***********************************************
      
      Add :field to the recognized clauses under mbql-entity-reference? (since that is how field refs are now represented in MBQL)
      
      Add temp-field helper fn in test_util.clj to reduce some of the "world" verbosity
      
      Adding new test card to world that uses template-tags with a field reference under dimension
      
      ************************
      * Handle textbox cards *
      ************************
      
      Add support for passing textbox (virtual) cards through visualization_settings.cljc
      
      Updating load test to add a textbox type card to the dashboard and ensuring it is persisted
      
      ***************************************
      * Handle dimension parameter mappings *
      ***************************************
      
      In visualization_settings.cljc:
      Adding handling for parameterMapping
      Adding new roundtrip test for parameter mappings
      Some refactoring to split up larger fns
      
      In serialize.clj, convert all parts of the map that contain the dimension key to use fully qualified names for the respective fields
      
      In load.clj, do the opposite (convert back to new IDs)
      
      Updating test to add parameter mapping to an existing click action, and adding new assertions for that to ensure it is preserved
      
      Fixing serialize test now that the "ids" can themselves be maps (i.e within dimension vector)
      
      In visualization_settings_test.cljc
      Adding new roundtrip test for problem dashboard
      
      ***********************************
      * Handle field literals correctly *
      ***********************************
      
      On load, only attempt to translate field name to ID if it's actually a fully qualified field name
      
      Adding new fn to names.clj to provide that logic
      
      Adding new test card that uses such a field reference
      
      **************************************
      * Accept only-db-ids option for dump *
      **************************************
      
      Accept options map for the dump command.  Currently, the only supported option is `:only-db-ids`, which is expected to be a set of DB IDs to dump.  If set, then only these databases (and their respective tables, fields, and segments) will be dumped to the dump-dir.  If not set, then the previous behavior takes effect (all DBs).
      
Update the load_test.clj to set this to be only the new temporary DB, to avoid trampling existing `sample-dataset` and other instances
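The option's behavior is plain filtering; a hypothetical sketch (the real dump then walks each selected database's tables, fields, and segments):

```python
def select_databases(all_databases, only_db_ids=None):
    """With :only-db-ids unset, dump everything (the previous behavior);
    with a set of IDs, dump only those databases and their children."""
    if only_db_ids is None:
        return list(all_databases)
    return [db for db in all_databases if db["id"] in only_db_ids]
```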
      
      *****************************************************
      * Don't include personal collections of other users *
      *****************************************************
      
Use the `select-collections` fn to filter only collections that are public, or owned by the given user (by way of email), plus all collections with those as an ancestor
      
      Updating test to assert a non-owned personal collection is not persisted (including nested), but all nested personal collections are
      
      Adding new wrapped macro to also bind collection/*allow-deleting-personal-collections* true around the with-temp* call so that personal collections can legitimately be deleted (since they're temporary); the name of this macro was chosen to preserve the same amount of spacing within `with-world`
      
      *********************************
      * Support native query snippets *
      *********************************
      
      Adding support for NativeQuerySnippet:
      
      - in names
      - in dump (putting into subdirectory of collection, as cards)
      - in upsert (identity condition of :name :collection_id)
      - in load (no retry fn necessary since they should not have any dependencies themselves)
      
      Adding a snippet and a card referencing it to roundtrip test (including a deeply nested snippet)
      
      Fixing up handling of snippet namespaced collections and adding relevant name tests
      
      *********************************
      * Handle source-table for joins *
      *********************************
      
Adding `mbql-fully-qualified-names->ids*` for recursive calls, which does not normalize its args, keeping `mbql-fully-qualified-names->ids` as a wrapper to that
      
      Adding clause to `mbql-fully-qualified-names->ids*` to resolve `:source-table` when it's a fully qualified table name
      
      Adding util fn to names.clj to support that scenario
      
      Updating "My Card" test card to include a join
      
      Skipping two more drivers from `dump-load-entities-test` - SQLite and Spark because they don't support FKs
      
      ***************************
      * Cleanup and refactoring *
      ***************************
      
      Responding to PR feedback
      
      Updating upsert_test.clj to check that the identity-condition keys actually exist for the models
      
      Removing legacy MBQL syntax in a couple places
      
      Adding ordering by `:id` to serialization load command for predictability
      
      Lots of fixes and refactorings in visualization_settings.cljc:
       - added lots of specs and fdefs
       - added test fixture that instruments all the speced fns and required it from both visualization_settings.cljc and load_test.clj
       - renamed a bunch of fns for more consistency
       - added and fixed some docstrings
       - fix implementation of `parse-json-string` for cljs
      
      *************************************************
      * Preserve temporal-unit in source-field clause *
      *************************************************
      
      Update serialize and match macros to keep the rest of the match, when modifying the source field ID
      
      Adding a bunch of new tables in order to perform a "bucketing on implicit join" type query
      
      Updating test_util.clj to initialize the query processor with all the new temp tables so that queries actually work
      
      **************************
      * Misc cleanup and fixes *
      **************************
      
      Fixing the implementation of `fully-qualified-name->context` to work properly when the entity name itself is also a model name
      
      ***************************
      * Handle joining to cards *
      ***************************
      
      Recognize :source-table pointing to a fully qualified card name in `mbql-fully-qualified-names->ids*`, and adding the unresolved name capture to that lookup
      
      Adding new util fns to capture the unresolved named entries from anywhere within the MBQL query tree: `paths-to-key-in` and `gather-all-unresolved-names`
      
      Updating `resolve-card-dataset-query` to call this new `gather-all-unresolved-names` to pull all unresolved names from the MBQL query tree into card itself
      
      Renaming `fully-qualified-name->id-rec` to `resolve-source-query` for clarity, and removing now unneeded clause
      
      To the test case, adding a card that joins to another card
      Fix expansion of :metric macros inside nested queries (#16241) · 8f18c64d
      Cam Saul authored
      * Failing test
      
      * Fix expansion of :metric macros inside nested queries
      
      * Unskip Cypress test
      
      * Test fix :wrench:
      
      
      
      * Rewrite (fix) repro for #12507
      
Co-authored-by: Nemanja <31325167+nemanjaglumac@users.noreply.github.com>
  36. May 24, 2021
      Yyyymmddhhmmss date strings (#15790) · 6a632764
      dpsutton authored
* Add yyyymmddhhmmss coercions to type system

* Implementations for h2/mysql/postgres for yyyymmddhhmmss bytes and strings
      
      * Mongo and oracle
      
      * Adding checksum on migration it said was bad
      
      * import OffsetDateTime
      
      * Redshift and bigquery
      
      * snowflake expectations
      
      * sql server yyyymmddhhmmss. have to format the string then parse
      
sqlserver lacks a parse function that takes a format string; they just
take an integer that specifies a predefined format string. So we have
to make the string into the right format and then parse.
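The format-then-parse workaround can be illustrated generically: take the 14-digit string, reshape it into a format the engine's fixed-style parser accepts, then parse. In Python terms (strptime could of course parse the raw digits directly; the reshaping step mirrors what the SQL Server implementation has to do):

```python
from datetime import datetime

def parse_yyyymmddhhmmss(s: str) -> datetime:
    """Reshape '20210524173005' into an ISO-style string, then parse it —
    analogous to converting with one of SQL Server's predefined style
    numbers, since it has no parse-with-format-string function."""
    iso = f"{s[0:4]}-{s[4:6]}-{s[6:8]} {s[8:10]}:{s[10:12]}:{s[12:14]}"
    return datetime.strptime(iso, "%Y-%m-%d %H:%M:%S")
```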
      
      * presto yyyymmddhhmmss
      
      * Test byte conversions
      
      * Remove errant `mt/set-test-drivers!`
      
      * Remove sqlite, change keyword for multiple native types in def
      
the spec couldn't handle different shapes under the same keyword, so
just use :natives {:postgres "BYTEA"} / :native "BYTEA"
      
      * Make schema work with different shape maps
      
      * hx/raw "'foo'" -> hx/literal "foo" and remove checksums
      
      * _coercion_strategy -> _coercion-strategy
      
      * Handle coercion hierarchy for :Coercion/YYYYMMDDHHMMSSBytes->Temporal