This project is mirrored from https://github.com/metabase/metabase.
  1. Jan 13, 2022
  2. Jan 12, 2022
      Add Pulse URL param to email settings (#18879) · 4ac9b7de
      Chris Wu authored
      * Add Pulse URL param to email settings
      
      * Undo frontend changes -- make config only allowable by env var
      
      * Make config enterprise only
      
      * Address feedback
      
      * Change namespace and token feature
      
      * newline
      
      * Rename var and add docs
      4ac9b7de
      Sync multiple schemas for BigQuery (#19547) · 526594f4
      Jeff Evans authored
      Add functions to describe-database namespace for performing inclusive/exclusive schema filtering
      
      Add tests for these functions in isolation
      
      Update `:bigquery-cloud-sdk` driver and QP code to remove the single `dataset-id` conn-param
      
      Change the `:bigquery-cloud-sdk` driver to use dataset inclusion/exclusion filters instead
      
      Update the `add.cy.spec.js` Cypress test to account for the new structure
      
      Add data upgrade (via `driver/normalize-db-details` implementation) to change `dataset-id` property to inclusion filter
      
      Add test to confirm data upgrade
      
      Add server-side expansion of the `:schema-filters` type connection property
      526594f4
      Slack app settings (#19597) · badc6c13
      Alexander Polyankin authored
      badc6c13
  3. Jan 11, 2022
  4. Jan 10, 2022
      Waterfall bignum display fix (#19025) (#19563) · 0c190ec6
      Howon Lee authored
      This actually has nothing to do with waterfalls. Really, it's that static viz is brutally incompatible with the bignum types, bigint and bigdec. However, nothing will fit on the screen anyway if you actually need a bignum or bigint type, so the solution was to just coerce and be OK with exploding if a value actually needed bignum range.
      0c190ec6
  5. Jan 07, 2022
      Allow override of WEEK_START in Snowflake driver (#19375) · 2d07f70a
      Jeff Evans authored
      * Allow override of WEEK_START in Snowflake driver
      
      Incorporate the following sources of WEEK_START, in decreasing order of precedence:
      1) the connection string, a.k.a. additional-options (ex: WEEK_START=N)
      2) the Metabase "start-of-week" setting (i.e. day of a week)
      3) the default value (7: Sunday)
      
      Add a new fn to the driver.common namespace to parse a connection-string param out of additional-options (plus a test for it)
      
      Add a Snowflake driver test to ensure that all the cases above are covered as expected, by invoking the DAYOFWEEK function on a set of fixed dates
      
      Update documentation for start-of-week setting to clarify that it now influences this behavior
      
      Swap out parse-additional-options-value for additional-options->map instead (more generic)
      
      Update test accordingly
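      
      The precedence above can be sketched in Clojure (hypothetical helper names; the real driver code differs):
      
      ```clojure
      (require '[clojure.string :as str])
      
      ;; Sketch of `additional-options->map`: parse "K1=V1&K2=V2" into a keyword map.
      (defn additional-options->map [s]
        (into {}
              (for [kv    (str/split (str s) #"&")
                    :let  [[k v] (str/split kv #"=" 2)]
                    :when (seq k)]
                [(keyword (str/lower-case k)) v])))
      
      ;; Resolve WEEK_START: connection string > start-of-week setting > 7 (Sunday).
      (defn effective-week-start [additional-options start-of-week-setting]
        (or (some-> (additional-options->map additional-options)
                    :week_start
                    Long/parseLong)
            start-of-week-setting
            7))
      
      (effective-week-start "WEEK_START=1" 2) ;=> 1
      (effective-week-start nil 2)            ;=> 2
      (effective-week-start nil nil)          ;=> 7
      ```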
      2d07f70a
  6. Jan 06, 2022
  7. Jan 04, 2022
  8. Jan 03, 2022
  9. Dec 30, 2021
  10. Dec 29, 2021
  11. Dec 28, 2021
      Database-local Settings (#19399) · b9bee5dc
      Cam Saul authored
      * Rename setting/get and setting/all; GeoJSON via env var tests
      
      * Fix typo (thanks @noahmoss)
      
      * Support Database-local Settings in metabase.models.setting itself
      
      * Rework Settings code so it can handle possibly-already-deserialized values
      
      * Database-local Settings
      
      * Remove empty part of docstring
      
      * Appease linters
      
      * Update dox again
      
      * Use text.type for the new column
      
      * Test fixes :wrench:
      
      * Test fix :wrench:
      
      * Test fix :wrench:
      
      * Allow negative integer values when setting an :integer Setting with a string
      b9bee5dc
      Handle nested queries which have agg at top level and nested (#19437) · 50818a92
      dpsutton authored
      * Handle nested queries which have agg at top level and nested
      
      Previously, when matching columns in the outer query with columns in the
      inner query, we used id and then, more recently, field_ref. This is
      problematic for aggregations.
      
      Consider https://github.com/metabase/metabase/issues/19403
      
      The mbql for this query is
      
      ```clojure
      {:type :query
       :query {:aggregation [[:aggregation-options
                              [:count]
                              {:name "count"}]
                             [:aggregation-options
                              [:avg
                               [:field
                                "sum"
                                {:base-type :type/Float}]]
                              {:name "avg"}]]
               :limit 10
               :source-card-id 1960
               :source-query {:source-table 1
                              :aggregation [[:aggregation-options
                                             [:sum
                                              [:field 23 nil]]
                                             {:name "sum"}]
                                            [:aggregation-options
                                             [:max
                                              [:field 28 nil]]
                                             {:name "max"}]]
                              :breakout [[:field 26 nil]]
                              :order-by [[:asc
                                          [:field 26 nil]]]}}
       :database 1}
      ```
      
      The aggregations in the top-level select will be typed as :name
      "count" :field_ref [:aggregation 0]. The aggregations in the nested
      query will be turned into :name "sum" :field_ref [:aggregation 0]! This
      is because aggregations are numbered "on their level", not
      globally. So when the fields on the top level look at the metadata for
      the nested query and merge it, it unifies the two [:aggregation 0]
      fields, but this is INCORRECT. These aggregations are not the same; they
      just happen to be the first aggregations at each level.
      
      It's illustrative to see what a (select * from (query with aggregations))
      looks like:
      
      ```clojure
      {:database 1
       :query {:source-card-id 1960
               :source-metadata [{:description "The type of product, valid values include: Doohicky, Gadget, Gizmo and Widget"
                                  :semantic_type :type/Category
                                  :coercion_strategy nil
                                  :name "CATEGORY"
                                  :field_ref [:field 26 nil]
                                  :effective_type :type/Text
                                  :id 26
                                  :display_name "Category"
                                  :fingerprint {:global {:distinct-count 4
                                                         :nil% 0}
                                                :type {:type/Text {:percent-json 0
                                                                   :percent-url 0
                                                                   :percent-email 0
                                                                   :percent-state 0
                                                                   :average-length 6.375}}}
                                  :base_type :type/Text}
                                 {:name "sum"
                                  :display_name "Sum of Price"
                                  :base_type :type/Float
                                  :effective_type :type/Float
                                  :semantic_type nil
                                  :field_ref [:aggregation 0]}
                                 {:name "max"
                                  :display_name "Max of Rating"
                                  :base_type :type/Float
                                  :effective_type :type/Float
                                  :semantic_type :type/Score
                                  :field_ref [:aggregation 1]}]
               :fields ([:field 26 nil]
                        [:field
                         "sum"
                         {:base-type :type/Float}]
                        [:field
                         "max"
                         {:base-type :type/Float}])
               :source-query {:source-table 1
                              :aggregation [[:aggregation-options
                                             [:sum
                                              [:field 23 nil]]
                                             {:name "sum"}]
                                            [:aggregation-options
                                             [:max
                                              [:field 28 nil]]
                                             {:name "max"}]]
                              :breakout [[:field 26 nil]]
                              :order-by [[:asc
                                          [:field 26 nil]]]}}
       :type :query
       :middleware {:js-int-to-string? true
                    :add-default-userland-constraints? true}
       :info {:executed-by 1
              :context :ad-hoc
              :card-id 1960
              :nested? true
              :query-hash #object["[B" 0x10227bf4 "[B@10227bf4"]}
       :constraints {:max-results 10000
                     :max-results-bare-rows 2000}}
      ```
      
      The important bits are that it understands the nested query's metadata
      to be
      
      ```clojure
      {:name "sum"
       :display_name "Sum of Price"
       :field_ref [:aggregation 0]}
      {:name "max"
       :display_name "Max of Rating"
       :field_ref [:aggregation 1]}
      ```
      
      And the fields on the outer query to be:
      ```clojure
      ([:field
        "sum"
        {:base-type :type/Float}]
       [:field
        "max"
        {:base-type :type/Float}])
      ```
      
      So there's the mismatch: the field_ref on the outside is [:field "sum"]
      but on the inside the field_ref is [:aggregation 0]. So the best way to
      match up when "looking down" into subqueries is by id and then by name.
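      
      A minimal sketch of that "by id and then by name" lookup (hypothetical helper, not the actual QP code):
      
      ```clojure
      ;; Match an outer field ref against the inner query's metadata:
      ;; integer refs match on :id, string refs fall back to :name.
      (defn find-source-metadata [source-metadata [_ id-or-name _opts]]
        (or (when (integer? id-or-name)
              (some #(when (= (:id %) id-or-name) %) source-metadata))
            (some #(when (= (:name %) id-or-name) %) source-metadata)))
      
      (def source-metadata
        [{:id 26 :name "CATEGORY" :field_ref [:field 26 nil]}
         {:name "sum" :field_ref [:aggregation 0]}])
      
      (find-source-metadata source-metadata [:field "sum" {:base-type :type/Float}])
      ;=> {:name "sum", :field_ref [:aggregation 0]}
      (find-source-metadata source-metadata [:field 26 nil])
      ;=> {:id 26, :name "CATEGORY", :field_ref [:field 26 nil]}
      ```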
      
      * Some drivers return 4.0 instead of 4 so make them all ints
      
      * Handle dataset metadata in a special way
      
      rather than trying to set confusing merge rules, just special case
      metadata from datasets.
      
      Previously, we were trying to merge the "preserved keys" on top of the
      normal merge order. This caused lots of issues, none of them
      trivial. Native display names are very close to the column name, whereas
      mbql names go through some humanization. So when you select price
      from (select PRICE ...), it's mbql on top of a nested native query. The
      merge order meant that the display name went from "Price"
      previously (the mbql nice name for the outer select) to "PRICE", the
      underlying native column name. Now we don't special-case the display
      name (and other fields) of regular source-metadata.
      
      Also, there were issues with mbql on top of an mbql dataset. Since it is
      all mbql, everything is pulled from the database. So if there were
      overrides in the nested mbql dataset (description, display name, etc.),
      the outer field select took the display name, etc. from the database
      rather than letting the edits from the nested query override it.
      
      Also, using the long `:source-query/dataset?` keyword makes it far easier
      to find where this is set. With things called just `:dataset` it can be
      quite hard to find where these keys are used. With the namespaced
      keyword, grepping and finding usages is trivial, and the namespace gives
      more context.
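      
      As a tiny illustration of the grepping point (hypothetical usage):
      
      ```clojure
      ;; A bare :dataset key is hard to search for; the namespaced keyword is not:
      (def source-query (assoc {:source-table 1} :source-query/dataset? true))
      ;; `grep -r ":source-query/dataset?"` now finds every site that reads or
      ;; writes the flag, while `grep -r ":dataset"` drowns in false positives.
      ```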
      50818a92
  12. Dec 27, 2021
  13. Dec 26, 2021
  14. Dec 24, 2021
  15. Dec 22, 2021
  16. Dec 21, 2021
  17. Dec 20, 2021
      Fix file upload of secret values (#19398) · b8b1b28a
      Jeff Evans authored
      * Fix file upload of secret values
      
      Fix logic in `db-details-client->server` to properly consider the `-options` suffixed connection property from the client version, to determine the treatment to be applied (i.e. base64 decode)
      
      Change logic in `db-details-client->server` to `assoc` nil rather than `dissoc` unused keywords (so they override the saved db details on merge)
      
      Update `driver.u/database->driver*` so that the engine loaded from the database instance is *always* a keyword
      
      Update `metabase.test.data.oracle/connection-details` to pass the property vals through the client->server translation layer
      
      Change `creator_id` in `secret` model to be nullable (since, when creating test DBs, the API current user is deliberately set to nil)
      
      Change the Oracle SSL test to iterate over the file-path and uploaded variants within the test itself, to get around weird CircleCI/environment issues
      
      Use `rds_root_ca_truststore.jks` for the test instead since we don't need the full cacerts for this
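      
      The "-options"-driven treatment can be sketched like this (hypothetical property names and helper; the real `db-details-client->server` is more involved):
      
      ```clojure
      (import '(java.util Base64))
      
      ;; If the client marked the secret as "uploaded", its value arrives base64
      ;; encoded and must be decoded server-side; file paths pass through as-is.
      (defn apply-secret-treatment [details prop]
        (let [options (get details (keyword (str (name prop) "-options")))]
          (if (= options "uploaded")
            (update details prop #(String. (.decode (Base64/getDecoder) ^String %)))
            details)))
      
      (apply-secret-treatment {:ssl-cert "aGVsbG8=" :ssl-cert-options "uploaded"} :ssl-cert)
      ;=> {:ssl-cert "hello", :ssl-cert-options "uploaded"}
      ```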
      b8b1b28a
  18. Dec 17, 2021
      Respect disabled nested queries on the frontend (#19356) · 4138b8ae
      Anton Kulyk authored
      * Reproduce #19341
      
      * Reproduce unexpected behaviour for dataset picker
      
      * Hide "Explore results" button
      
      * Hide "Saved Questions" database from data picker
      
      * Hide "Saved Questions" step from dataset-style data picker
      
      * Set visibility :auth for "enable-nested-queries"
      
      * Fix datasets pickers navigation
      
      * Fix test
      
      * Fix saved questions appearing in data selector search
      4138b8ae
  19. Dec 16, 2021
  20. Dec 15, 2021
      Fix combos in tables of column cardinality bigger than the columns being used... · 46c7180d
      Howon Lee authored
      Fix combo viz breaking in tables whose column cardinality is bigger than the number of columns used in the combo (#19368)
      
      Pursuant to #18676. Whacks a regression from the second fix of the combo viz
      46c7180d
      Datasets preserve metadata (#19158) · 5f8bc305
      dpsutton authored
      * Preserve metadata and surface metadata of datasets
      
      Need to handle two cases:
      1. querying the dataset itself
      2. a question that is a nested question of the dataset
      
      1. Querying the dataset itself
      This is a bit of a shift of how metadata works. Previously it was just
      thrown away and saved on each query run. This kind of needs to be this
      way because when you edit a question, we do not ensure that the metadata
      stays in sync! There's some checksum operation that ensures that the
      metadata hasn't been tampered with, but it doesn't ensure that it
      actually matches the query any longer.
      
      So imagine you add a new column to a query. The metadata is not changed,
      but its checksum matches the original query's metadata and the backend
      happily saves this. Then on a subsequent run of the query (or if you hit
      visualize before saving) the metadata is tossed and updated.
      
      So to handle this carelessness, we have to allow the metadata that can
      be edited to persist across just running the dataset query. So when
      hitting the card api, stick the original metadata in the middleware (and
      update the normalize ns not to mangle field_ref -> field-ref among
      others). Once we have this smuggled in, when computing the metadata in
      annotate, we need a way to index the columns. The old and bad way was
      the following:
      
      ```clojure
      ;; old
      (let [field-id->metadata (u/key-by :id source-metadata)] ...)
      ;; new and better
      (let [ref->metadata (u/key-by (comp u/field-ref->key :field_ref) source-metadata)] )
      ```
      
      This change is important because ids exist only for fields that map to
      actual database columns. Computed columns, case expressions, manipulations,
      and all native fields will lack them. But we can always make field references.
      
      Then for each field in the newly computed metadata, allow the non-type
      information to persist. We do not want to override type information as
      this can break a query, but things like description, display name,
      semantic type can survive.
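      
      A sketch of that merge rule, with a hypothetical helper (type keys always come from the fresh query; edited keys survive):
      
      ```clojure
      ;; Keys a dataset author may edit and that should survive a re-run.
      (def ^:private editable-keys [:description :display_name :semantic_type])
      
      (defn merge-editable-metadata
        "Overlay saved, user-edited metadata onto a freshly computed column,
        but never let it override type information."
        [computed-col saved-col]
        (merge computed-col (select-keys saved-col editable-keys)))
      
      (merge-editable-metadata
       {:name "PRICE" :base_type :type/Float :display_name "PRICE"}
       {:display_name "Price" :description "Retail price" :base_type :type/Text})
      ;=> {:name "PRICE", :base_type :type/Float,
      ;    :display_name "Price", :description "Retail price"}
      ```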
      
      This metadata is then saved in the db as always so we can continue with
      the bit of careless metadata saving that we do.
      
      2. a question that is a nested question of the dataset
      This was a simpler change to grab the source-metadata and ensure that it
      is blended into the result metadata in the same way.
      
      Things i haven't looked at yet: column renaming, if we need to allow
      conversions to carry through or if those necessarily must be opaque (ie,
      once it has been cast forget that it was originally a different type so
      we don't try to cast the already cast value), and i'm sure some other
      things. But it has been quite a pain to figure all of this stuff
      out. Especially the divide between native and mbql since native requires
      the first row of values back before it can detect some types.
      
      * Add in base-type specially
      
      Best to use field_refs to combine metadata from datasets. This means
      that we add this ref before the base-type is known. So we have to update
      this base-type later once they are known from sampling the results
      
      * Allow column information through
      
      I'm not sure how this base-type is set for
      annotate-native-cols. Presumably we don't have it and get it from the
      results, but this is not true. I guess we do some analysis on count
      types. I'm not sure why they failed, though.
      
      * Correctly infer this stuff
      
      This was annoying. I like :field_ref over :name for indexing, as it is
      guaranteed unique. But datasets will have unique names due to a
      restriction*. The problem was that annotating the native results before
      we had type information gave us refs like `[:field "foo" {:base-type
      :type/*}]`, but then this ruined the merge strategy at the end and
      prevented a proper ref from being merged on top. Quite annoying. This stuff
      is very whack-a-mole: you fix one bit and another breaks
      somewhere else**.
      
      * cannot have identical names for a subselect:
          select id from (select 1 as id, 2 as id)
      
      ** in fact, another test broke on this commit
      
      * Revert "Correctly infer this stuff"
      
      This reverts commit 1ffe44e90076b024efd231f84ea8062a281e69ab.
      
      * Annotate but de-annotate in a way
      
      To combine metadata from the db, really, really want to make sure they
      actually match up. Cannot use name as this could collide when there are
      two IDs in the same query. Combining metadata on that gets nasty real
      quick.
      
      For mbql and native, it's best to use field_refs. Field_refs offer the
      best of both worlds: with an id, we are golden and match by id; with a name,
      the names have been uniquified already. So this will run into issues if you
      reorder a query or add a new column with the same name, but I think
      that's the theoretical best we can do.
      
      BUT, we have to do a little cleanup for this stuff. When native adds the
      field_ref, it needs to include some type information. But this isn't
      known until after the query runs, since a native query is just opaque
      until we run it. So annotating will add a `[:field name
      {:base_type :type/*}]` ref, and then our merging doesn't clobber that
      later. So it's best to add the field_refs, match up with any db metadata,
      and then remove the field_refs.
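      
      Sketched, with hypothetical names (the real annotate code is more subtle):
      
      ```clojure
      ;; Add placeholder refs to native columns, merge saved metadata keyed by
      ;; the same ref shape, then drop the temporary refs again.
      (defn- placeholder-ref [{:keys [name]}]
        [:field name {:base_type :type/*}])
      
      (defn merge-native-metadata [cols saved-metadata]
        (let [saved-by-ref (into {} (map (juxt placeholder-ref identity)) saved-metadata)]
          (map (fn [col]
                 (-> (merge col (get saved-by-ref (placeholder-ref col)))
                     (dissoc :field_ref)))
               cols)))
      ```
      
      Keying the saved metadata by the same placeholder ref shape sidesteps the clobbering problem, since both sides agree on the key while it exists.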
      
      * Test that metadata flows through
      
      * Test mbql datasets and questions based on datasets
      
      * Test mbql/native queries and nested queries
      
      * Recognize that native query bubbles into nested
      
      When using a nested query based on a native query, the metadata from the
      underlying dataset is used. Previously we would clobber this with the
      metadata from the expected cols of the wrapping mbql query. This would
      process the display name with `humanization/name->human-readable-name`
      whereas for native it goes through `u/qualified-name`.
      
      I originally piped the native's name through the humanization, but that
      leads to lots of test failures, and perhaps correct failures. For
      instance, a csv test asserts the column title is "COUNT(*)" but the
      change would emit "Count(*)"; a humanization of count(*) isn't
      necessarily an improvement, nor even correct.
      
      It is possible that we could change this in the future but I'd want it
      to be a deliberate change. It should be mechanical, just adjusting
      `annotate-native-cols` in annotate.clj to return a humanized display
      name and then fixing tests.
      
      * Allow computed display name on top of source metadata name
      
      If we have a join, we want the "pretty" name to land on top of the
      underlying table's name. "alias → B Column" vs "B Column".
      
      * Put dataset metadata in info, not middleware
      
      * Move metadata back under dataset key in info
      
      We want to ensure that dataset information is propagated, but card
      information should be computed fresh each time. Including the card
      information each time leads to errors as it erroneously thinks the
      existing card info should shadow the dataset information. This is
      actually a tricky case: figuring out when to care about information at
      arbitrary points in the query processor.
      
      * Update metadata to :info not :middleware in tests
      
      * Make var private and comment about info metadata
      5f8bc305
      Fix X-Rays with filters (#19370) · 4361d6f7
      Cam Saul authored
      4361d6f7
  21. Dec 14, 2021
      Reset password use site url bugfix #14028 (#19367) · c76ea739
      adam-james authored
      
      * Patch from a 1:1 with Dan.
      This is a set of changes made while pairing with Dan, where we tried a few different approaches to getting the proper site url in the email.
      
      * Changed predicate inside forgot password test
      
      * Remove hostname as arg for password reset email
      The email template was changed to use siteUrl, so hostname is unused.
      
      * use DateAllOptionsWidget for single date and range date filters (#19360)
      
      * use DateAllOptionsWidget for single date and range date filters
      
      * fix specs
      
      * fix specs
      
      * add more specs
      
      Co-authored-by: Alexander Lesnenko <alxnddr@users.noreply.github.com>
      c76ea739
      Negative progress bars valid, different color (#19352) · e6c12996
      Howon Lee authored
      Negative progress bars should display as progress bars w/ progress 0.
      
      Also changes default color of progress bar to green.
      e6c12996
  22. Dec 13, 2021