This project is mirrored from https://github.com/metabase/metabase.
- Jan 04, 2022
Noah Moss authored
- Jan 03, 2022
Howon Lee authored
One of the many ways to have multiplicity in charts is a single card with a single chart type and multiple y-axes. This change makes the general line-area-bar chart compatible with that case.
Howon Lee authored
We had some CI breaks that are predicated on the year changing. Fix them.
- Dec 28, 2021
Cam Saul authored
* Rename setting/get and setting/all; GeoJSON via env var tests
* Fix typo (thanks @noahmoss)
* Support Database-local Settings in metabase.models.setting itself (see the sketch below)
* Rework Settings code so it can handle possibly-already-deserialized values
* Database-local Settings
* Remove empty part of docstring
* Appease linters
* Update dox again
* Use text.type for the new column
* Test fixes
* Test fix
* Test fix
* Allow negative integer values when setting :integer Setting with a string
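A hand-wavy sketch of the Database-local Setting resolution idea described above (a value stored on the Database shadows the site-wide value). The map shapes and accessor below are illustrative assumptions, not the metabase.models.setting API:

```clojure
(defn setting-value
  "Resolve a setting: a Database-local value, when present, shadows the site-wide value."
  [setting-key database site-wide-settings]
  (let [db-local (get-in database [:settings setting-key])]
    (if (some? db-local)
      db-local
      (get site-wide-settings setting-key))))

;; The Database-local value wins over the site-wide one:
(setting-value :example-setting
               {:settings {:example-setting 1000}}
               {:example-setting 2000})
;; => 1000
```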
dpsutton authored
* Handle nested queries which have agg at top level and nested

Previously, when matching columns in the outer query with columns in the inner query, we had used id, and then recently, field_ref. This is problematic for aggregations. Consider https://github.com/metabase/metabase/issues/19403

The mbql for this query is

```clojure
{:type     :query
 :query    {:aggregation    [[:aggregation-options [:count] {:name "count"}]
                             [:aggregation-options [:avg [:field "sum" {:base-type :type/Float}]] {:name "avg"}]]
            :limit          10
            :source-card-id 1960
            :source-query   {:source-table 1
                             :aggregation  [[:aggregation-options [:sum [:field 23 nil]] {:name "sum"}]
                                            [:aggregation-options [:max [:field 28 nil]] {:name "max"}]]
                             :breakout     [[:field 26 nil]]
                             :order-by     [[:asc [:field 26 nil]]]}}
 :database 1}
```

The aggregations in the top level select will be type checked as :name "count" :field_ref [:aggregation 0]. The aggregations in the nested query will be turned into :name "sum" :field_ref [:aggregation 0]! This is because aggregations are numbered "on their level" and not globally. So when the fields on the top level look at the metadata for the nested query and merge it, it unifies the two [:aggregation 0] fields, but this is INCORRECT. These aggregations are not the same, they just happen to be the first aggregations at each level.

It's illustrative to see what a (select * from (query with aggregations)) looks like:

```clojure
{:database    1
 :query       {:source-card-id  1960
               :source-metadata [{:description       "The type of product, valid values include: Doohicky, Gadget, Gizmo and Widget"
                                  :semantic_type     :type/Category
                                  :coercion_strategy nil
                                  :name              "CATEGORY"
                                  :field_ref         [:field 26 nil]
                                  :effective_type    :type/Text
                                  :id                26
                                  :display_name      "Category"
                                  :fingerprint       {:global {:distinct-count 4 :nil% 0}
                                                      :type   {:type/Text {:percent-json 0 :percent-url 0 :percent-email 0
                                                                           :percent-state 0 :average-length 6.375}}}
                                  :base_type         :type/Text}
                                 {:name "sum" :display_name "Sum of Price" :base_type :type/Float
                                  :effective_type :type/Float :semantic_type nil :field_ref [:aggregation 0]}
                                 {:name "max" :display_name "Max of Rating" :base_type :type/Float
                                  :effective_type :type/Float :semantic_type :type/Score :field_ref [:aggregation 1]}]
               :fields          ([:field 26 nil] [:field "sum" {:base-type :type/Float}] [:field "max" {:base-type :type/Float}])
               :source-query    {:source-table 1
                                 :aggregation  [[:aggregation-options [:sum [:field 23 nil]] {:name "sum"}]
                                                [:aggregation-options [:max [:field 28 nil]] {:name "max"}]]
                                 :breakout     [[:field 26 nil]]
                                 :order-by     [[:asc [:field 26 nil]]]}}
 :type        :query
 :middleware  {:js-int-to-string? true :add-default-userland-constraints? true}
 :info        {:executed-by 1
               :context     :ad-hoc
               :card-id     1960
               :nested?     true
               :query-hash  #object["[B" 0x10227bf4 "[B@10227bf4"]}
 :constraints {:max-results 10000 :max-results-bare-rows 2000}}
```

The important bits are that it understands the nested query's metadata to be

```clojure
{:name "sum" :display_name "Sum of Price" :field_ref [:aggregation 0]}
{:name "max" :display_name "Max of Rating" :field_ref [:aggregation 1]}
```

And the fields on the outer query to be:

```clojure
([:field "sum" {:base-type :type/Float}] [:field "max" {:base-type :type/Float}])
```

So there's the mismatch: the field_ref on the outside is [:field "sum"] but on the inside the field_ref is [:aggregation 0]. So the best way to match up when "looking down" into sub queries is by id and then by name (a short sketch follows this commit message).
* Some drivers return 4.0 instead of 4, so make them all ints
* Handle dataset metadata in a special way

Rather than trying to set confusing merge rules, just special case metadata from datasets. Previously, we were trying to merge the "preserved keys" on top of the normal merge order. This caused lots of issues. They were trivial: native display names are very close to the column name, whereas mbql names go through some humanization. So when you select price from (select PRICE ...), it's mbql with a nested native query. The merge order meant that the display name went from "Price" previously (the mbql nice name for the outer select) to "PRICE", the underlying native column name. Now we don't special case the display name (and other fields) of regular source-metadata.

Also, there were issues with mbql on top of an mbql dataset. Since it is all mbql, everything is pulled from the database. So if there were overrides in the nested mbql dataset, like description, display name, etc., the outer field select already had the display name, etc. from the database rather than allowing the edits to override from the nested query.

Also, we now use a long `:source-query/dataset?` keyword so it is far easier to find where this is set. With things called just `:dataset` it can be quite hard to find where these keys are used. When using the namespaced keyword, grepping and finding usages is trivial, and the namespace gives more context.
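A minimal sketch of the "by id and then by name" matching described in this commit message, matching an outer [:field ...] ref against the nested query's metadata. This is illustrative only, not the actual Metabase implementation:

```clojure
(defn match-source-metadata
  "Find the column in `source-metadata` that an outer [:field id-or-name opts]
  ref points at: integer refs match on :id, string refs match on :name."
  [[_ id-or-name _opts] source-metadata]
  (if (integer? id-or-name)
    (some #(when (= (:id %) id-or-name) %) source-metadata)
    (some #(when (= (:name %) id-or-name) %) source-metadata)))

;; [:field "sum" ...] from the outer query finds the nested [:aggregation 0] column
;; by name, without conflating it with the outer query's own [:aggregation 0].
(match-source-metadata
 [:field "sum" {:base-type :type/Float}]
 [{:name "sum" :display_name "Sum of Price" :field_ref [:aggregation 0]}
  {:name "max" :display_name "Max of Rating" :field_ref [:aggregation 1]}])
;; => {:name "sum", :display_name "Sum of Price", :field_ref [:aggregation 0]}
```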
- Dec 21, 2021
Noah Moss authored
- Dec 20, 2021
Jeff Evans authored
* Fix file upload of secret values

Fix logic in `db-details-client->server` to properly consider the `-options` suffixed connection property from the client version, to determine the treatment to be applied (i.e. base64 decode)
Change logic in `db-details-client->server` to `assoc` nil rather than `dissoc` unused keywords (so they override the saved db details on merge; see the sketch below)
Update `driver.u/database->driver*` so that the engine loaded from the database instance is *always* a keyword
Update `metabase.test.data.oracle/connection-details` to pass the property vals through the client->server translation layer
Change `creator_id` in `secret` model to be nullable (since, when creating test DBs, the API current user is deliberately set to nil)
Switch the logic in Oracle SSL test to iterate on the file-path and uploaded variants to the test itself to get around weird CircleCI/environment issues
Use `rds_root_ca_truststore.jks` for the test instead since we don't need the full cacerts for this
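A small illustrative sketch of why `assoc`ing nil beats `dissoc`ing here; the map keys and values are hypothetical, not the real connection-property names:

```clojure
(def saved-details {:ssl-key-value "old-secret", :host "db.example.com"})

;; dissoc'ing the unused key lets merge keep the stale saved value:
(merge saved-details (dissoc {:ssl-key-value nil, :host "db.example.com"} :ssl-key-value))
;; => {:ssl-key-value "old-secret", :host "db.example.com"}

;; assoc'ing nil makes the client-supplied value override the saved details on merge:
(merge saved-details {:ssl-key-value nil, :host "db.example.com"})
;; => {:ssl-key-value nil, :host "db.example.com"}
```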
- Dec 16, 2021
Cam Saul authored
* Rename setting/get and setting/all; GeoJSON via env var tests
* Fix typo (thanks @noahmoss)
Dalton authored
* Remove references to flag in FE code
* Remove flag from BE code
* Remove/update tests related to the feature flag
* Show widget-type selection of 'String' for category & location template tags
Cam Saul authored
* Validate Dashboard parameter types
* Test/lint fixes
dpsutton authored
* Honor `enable-nested-queries` setting in expansion

Honor the nested queries setting when expanding queries in the middleware

* sort ns
- Dec 15, 2021
Howon Lee authored
Fix combo charts being broken in tables whose column cardinality is bigger than the number of columns used in the combo (#19368). Pursuant to #18676. Whacks a regression from the second fix of the combo viz.
dpsutton authored
* Preserve metadata and surface metadata of datasets

Need to handle two cases:

1. querying the dataset itself
2. a question that is a nested question of the dataset

1. Querying the dataset itself

This is a bit of a shift of how metadata works. Previously it was just thrown away and saved on each query run. This kind of needs to be this way because when you edit a question, we do not ensure that the metadata stays in sync! There's some checksum operation that ensures that the metadata hasn't been tampered with, but it doesn't ensure that it actually matches the query any longer. So imagine you add a new column to a query. The metadata is not changed, but its checksum matches the original query's metadata and the backend happily saves this. Then on a subsequent run of the query (or if you hit visualize before saving) the metadata is tossed and updated.

So to handle this carelessness, we have to allow the metadata that can be edited to persist across just running the dataset query. So when hitting the card api, stick the original metadata in the middleware (and update the normalize ns not to mangle field_ref -> field-ref, among others). Once we have this smuggled in, when computing the metadata in annotate, we need a way to index the columns. The old and bad way was the following:

```clojure
;; old
(let [field-id->metadata (u/key-by :id source-metadata)] ...)

;; new and better
(let [ref->metadata (u/key-by (comp u/field-ref->key :field_ref) source-metadata)] ...)
```

This change is important because ids are only for fields that map to actual database columns; computed columns, case, manipulations, and all native fields will lack this. But we can make field references. Then for each field in the newly computed metadata, allow the non-type information to persist (see the sketch after this commit message). We do not want to override type information as this can break a query, but things like description, display name, and semantic type can survive. This metadata is then saved in the db as always so we can continue with the bit of careless metadata saving that we do.

2. a question that is a nested question of the dataset

This was a simpler change: grab the source-metadata and ensure that it is blended into the result metadata in the same way.

Things I haven't looked at yet: column renaming, whether we need to allow conversions to carry through or whether those necessarily must be opaque (i.e., once it has been cast, forget that it was originally a different type so we don't try to cast the already-cast value), and I'm sure some other things. But it has been quite a pain to figure all of this stuff out, especially the divide between native and mbql, since native requires the first row of values back before it can detect some types.

* Add in base-type specially

Best to use field_refs to combine metadata from datasets. This means that we add this ref before the base-type is known. So we have to update this base-type later once they are known from sampling the results.

* Allow column information through

I'm not sure how this base-type is set for annotate-native-cols. Presumably we don't have it and we get it from the results, but this is not true. I guess we do some analysis on count types. I'm not sure why they failed though.

* Correctly infer this stuff

This was annoying. I like :field_ref over :name for indexing, as it has a guaranteed unique name. But datasets will have unique names due to a restriction (*). The problem was that annotating the native results before we had type information gave us refs like `[:field "foo" {:base-type :type/*}]`, but then this ruined the merge strategy at the end and prevented a proper ref being merged on top. Quite annoying. This stuff is very whack-a-mole in that you fix one bit and another breaks somewhere else (**).

(*) cannot have identical names for a subselect: select id from (select 1 as id, 2 as id)
(**) in fact, another test broke on this commit

* Revert "Correctly infer this stuff"

This reverts commit 1ffe44e90076b024efd231f84ea8062a281e69ab.

* Annotate but de-annotate in a way

To combine metadata from the db, we really, really want to make sure they actually match up. Cannot use name, as this could collide when there are two IDs in the same query; combining metadata on that gets nasty real quick. For mbql and native, it's best to use field_refs. Field_refs offer the best of both worlds: if by id, we are golden and it's by id. If by name, they have been uniquified already. So this will run into issues if you reorder a query or add a new column in with the same name, but I think that's the theoretical best we can do.

BUT, we have to do a little cleanup for this stuff. When native adds the field_ref, it needs to include some type information. But this isn't known until after the query runs for native, since it's just an opaque query until we run it. So annotating will add a `[:field name {:base_type :type/*}]` and then our merging doesn't clobber that later. So it's best to add the field_refs, match up with any db metadata, and then remove the field_refs.

* Test that metadata flows through
* Test mbql datasets and questions based on datasets
* Test mbql/native queries and nested queries
* Recognize that native query bubbles into nested

When using a nested query based on a native query, the metadata from the underlying dataset is used. Previously we would clobber this with the metadata from the expected cols of the wrapping mbql query. This would process the display name with `humanization/name->human-readable-name`, whereas for native it goes through `u/qualified-name`. I originally piped the native's name through the humanization, but that leads to lots of test failures, and perhaps correct failures. For instance, a csv test asserts the column title is "COUNT(*)" but the change would emit "Count(*)"; a humanization of count(*) isn't necessarily an improvement nor even correct. It is possible that we could change this in the future but I'd want it to be a deliberate change. It should be mechanical: just adjust `annotate-native-cols` in annotate.clj to return a humanized display name and then fix tests.

* Allow computed display name on top of source metadata name

If we have a join, we want the "pretty" name to land on top of the underlying table's name: "alias → B Column" vs "B Column".

* Put dataset metadata in info, not middleware
* Move metadata back under dataset key in info

We want to ensure that dataset information is propagated, but card information should be computed fresh each time. Including the card information each time leads to errors, as it erroneously thinks the existing card info should shadow the dataset information. This is actually a tricky case: figuring out when to care about information at arbitrary points in the query processor.

* Update metadata to :info not :middleware in tests
* Make var private and comment about info metadata
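A rough sketch of the "index by field ref, then let the editable, non-type metadata survive" idea described above. `index-by-field-ref` and `merge-dataset-metadata` are illustrative names, not the actual annotate code:

```clojure
;; Key dataset source metadata by :field_ref instead of :id, since computed
;; columns, expressions, and native columns have no field id but do have refs.
(defn index-by-field-ref [source-metadata]
  (into {} (map (juxt :field_ref identity)) source-metadata))

;; Let the user-editable metadata from the dataset (description, display name,
;; semantic type) persist on top of a freshly computed column, without letting
;; it override the newly computed type information.
(defn merge-dataset-metadata [computed-col source-col]
  (merge computed-col
         (select-keys source-col [:description :display_name :semantic_type])))

(merge-dataset-metadata
 {:name "TOTAL" :base_type :type/Float :display_name "Total" :field_ref [:field 42 nil]}
 {:display_name "Order Total" :description "Total including tax" :field_ref [:field 42 nil]})
;; => {:name "TOTAL", :base_type :type/Float, :display_name "Order Total",
;;     :field_ref [:field 42 nil], :description "Total including tax"}
```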
Cam Saul authored
* Fix logging utils
* Test fixes
Cam Saul authored
- Dec 14, 2021
adam-james authored
* Patch from a 1:1 with Dan. This is a set of changes made while pairing with Dan, where we tried a few different approaches to getting the proper site url in the email.
* Changed predicate inside forgot password test
* Remove hostname as arg for password reset email. The email template was changed to use siteUrl, so hostname is unused.
* use DateAllOptionsWidget for single date and range date filters (#19360)
* use DateAllOptionsWidget for single date and range date filters
* fix specs
* fix specs
* add more specs
* Patch from a 1:1 with Dan. This is a set of changes made while pairing with Dan, where we tried a few different approaches to getting the proper site url in the email.
* Changed predicate inside forgot password test
* Remove hostname as arg for password reset email. The email template was changed to use siteUrl, so hostname is unused.

Co-authored-by: Alexander Lesnenko <alxnddr@users.noreply.github.com>
Howon Lee authored
Negative progress bars should display as progress bars w/ progress 0. Also changes default color of progress bar to green.
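A one-line illustration of the clamping rule described above (illustrative only, not the actual static-viz code):

```clojure
;; Negative progress values render as 0-progress bars.
(defn clamp-progress [value]
  (max 0 value))

(clamp-progress -5) ;; => 0
```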
- Dec 13, 2021
Howon Lee authored
Static visualizations for area charts. Pursuant to #18676
Noah Moss authored
Co-authored-by: Alexander Polyankin <alexander.polyankin@metabase.com>
- Dec 10, 2021
Cam Saul authored
* Refactor: move Card and Dashboard QP code into their own qp.* namespaces
* Disable extra validation for now so a million tests don't fail
* WIP
* Validate template tag :parameters in query in context of a Card
* Fixes
* Disable strict validation for now
* Test fixes [WIP]
* Make the parameter type schema a little more forgiving for now
* Tests & test fixes
* More test fixes
* 1. Need more tests 2. Need to actually validate stuff
* More test fixes.
* Test fixes (again)
* Test fix
* Some test fixes / PR feedback
* Disallow native queries with a tag widget-type of "none". Template tags with a widget-type that is undefined, null, or "none" now cause the query's isRunnable method to return false. Existing questions that have this defect won't be runnable until they are resaved with a set widget-type.
* Fix prettier error
* add snippet and card types to validation pass
* Make sure template tag map keys + `:names` agree + test fixes
* Have MBQL normalization reconcile template tags map key and :name
* Test fix
* Fix tests for Cljs
* Fix Mongo tests.
* Allow passing :category parameters for :text/:number/:date for now.
* Dashboard subscriptions should use qp.dashboard code for executing
* Make sure Dashboard QP parameter resolution code merges in default values
* Add a test for sending a test Dashboard subscription with default params
* Prettier
* If both Dashboard and Card have default param value, prefer Card's default (see the sketch below)
* Test fix
* More tests and more fixes

Co-authored-by: Dalton Johnson <daltojohnso@users.noreply.github.com>
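A tiny illustrative sketch of that default-value precedence (the parameter shapes are assumptions, not the actual qp.dashboard code): when both the Dashboard and the Card define a default for a parameter, the Card's default wins.

```clojure
(defn effective-default
  "Prefer the Card's default parameter value over the Dashboard's."
  [card-parameter dashboard-parameter]
  (or (:default card-parameter)
      (:default dashboard-parameter)))

(effective-default {:default "Gadget"} {:default "Widget"}) ;; => "Gadget"
(effective-default {}                  {:default "Widget"}) ;; => "Widget"
```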
Jeff Evans authored
* Bump log4j from 2.14.1 to 2.15.0
* Disable failing logging tests when bumping log4j

The 0day in log4j requires a bump in the dependency. These tests look for logs in testing, but our test logger doesn't seem to have levels set correctly. The disease is certainly worse than the remedy in this case; each instance is annotated with the reason it is disabled, and we can reenable them in calmer waters.

* Fix unused ns

Co-authored-by: Youngho Kim <miku@korea.ac.kr>
Co-authored-by: dan sutton <dan@dpsutton.com>
- Dec 09, 2021
adam-james authored
* Added URL to test for repro of issue #14491
* Allow alphanumeric and _ characters in domains

Hostnames and domains are not the same. Hostnames have the restriction of alphanumerics and hyphens, and must start with an alpha char. This is what our regex seemed to match before.

Relevant RFC for domain names: [RFC 2181, Section 11](https://www.rfc-editor.org/rfc/rfc2181#section-11)
Relevant RFC for hostnames: [RFC 1123, Section 2.1](https://www.ietf.org/rfc/rfc1123.html#section-2.1)

It seems that our url? regex is for hostnames, and is based on the UrlValidator code from apache commons. Though the issue #14491 author brings up RFC 1034 (which links to RFC 1123 with a bit of a spec) to indicate that underscores SHOULD be allowed, they aren't really in that case. But it seems that there's wiggle room in allowing it for 'domains' generally, and perhaps we should permit the use of underscores in this URL validation to allow for more configuration options for our users.

A stackoverflow page with helpful context and explanation which helped me think about this: https://stackoverflow.com/questions/2180465/can-domain-name-subdomains-have-an-underscore-in-it

* Fixed spacing
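For illustration only (this is not the actual Metabase url? regex), the difference between a strict hostname-style label and a relaxed domain-style label that additionally permits underscores:

```clojure
;; Strict hostname-style label: alphanumerics and hyphens, no leading/trailing hyphen.
(def hostname-label #"[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?")

;; Relaxed domain-style label: also allows underscores anywhere in the label.
(def domain-label   #"[A-Za-z0-9_](?:[A-Za-z0-9_-]*[A-Za-z0-9_])?")

(boolean (re-matches hostname-label "my_subdomain")) ;; => false
(boolean (re-matches domain-label   "my_subdomain")) ;; => true
```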
- Dec 08, 2021
Howon Lee authored
Combo type and multiple cards for static viz BE. These go into one FE endpoint but are really two separate things in the BE. This one conforms to the FE type, but the price is that the previous, more dynamic BE types needed to be changed to conform: this will require a refactoring to whack the js-viz types that already exist when the FE is refactored as well.
- Dec 07, 2021
dpsutton authored
* Only select type information on type inference of coalesce (see the sketch after this commit message)

This previously just selected the inferred information from the second argument to the coalesce. For fields this would return the entire field object, which includes an id. Later, `source-metadata->fields` assumes that since an id is present, we should select `[:field id nil]`, but this conflates "the coalesce column has the same type as that field" with "we want to select that field id".

```clojure
add-source-metadata=> (pprint (mbql-source-query->metadata
                               {:source-table 2
                                :joins        [{:fields       [[:field 22 {:join-alias "People - User"}]
                                                               [:field 15 {:join-alias "People - User"}]]
                                                :source-table 3
                                                :condition    [:= [:field 3 nil] [:field 22 {:join-alias "People - User"}]]
                                                :alias        "People - User"}]
                                :expressions  {:coalesce [:coalesce [:field 3 nil] [:field 22 {:join-alias "People - User"}]]}
                                :aggregation  [[:count]]
                                :breakout     [[:expression "coalesce"]]}))
({:semantic_type :type/FK, :table_id 2, :coercion_strategy nil,
  :name "coalesce",                    ;; merging maintains the correct name
  :settings nil,
  :field_ref [:expression "coalesce"], ;; merging maintains the field ref
  :effective_type :type/Integer, :parent_id nil,
  :id 3,                               ;; the type inference selected the entire field, including its id
  :display_name "coalesce", :fingerprint {:global {:distinct-count 929, :nil% 0.0}}, :base_type :type/Integer}
 {:name "count", :display_name "Count", :base_type :type/BigInteger, :semantic_type :type/Quantity, :field_ref [:aggregation 0]})
nil
```

updating to only select the type information:

```clojure
add-source-metadata=> (pprint (mbql-source-query->metadata
                               {:source-table 2
                                :joins        [{:fields       [[:field 22 {:join-alias "People - User"}]
                                                               [:field 15 {:join-alias "People - User"}]]
                                                :source-table 3
                                                :condition    [:= [:field 3 nil] [:field 22 {:join-alias "People - User"}]]
                                                :alias        "People - User"}]
                                :expressions  {:coalesce [:coalesce [:field 3 nil] [:field 22 {:join-alias "People - User"}]]}
                                :aggregation  [[:count]]
                                :breakout     [[:expression "coalesce"]]}))
({:name "coalesce", :display_name "coalesce",
  :base_type :type/Integer, :effective_type :type/Integer, ;; no more field information beyond the types
  :coercion_strategy nil, :semantic_type :type/FK,
  :field_ref [:expression "coalesce"]}
 {:name "count", :display_name "Count", :base_type :type/BigInteger, :semantic_type :type/Quantity, :field_ref [:aggregation 0]})
```

* Update question in repro now that it doesn't fail on the backend

Previously this question would actually fail on the backend, and independently the frontend would go into a stack overflow killing the page. Fixing the bug actually broke the test, as it asserted that the query would break and looked for the error message. Now that the query returns, assert we can find the name of the question "test question".

* Correct same bug for case

When inferring the type for a case, we were returning the type information for the first field we could find in the case statement. And that information included the field id. We always merge the current column's name on top of that to remove the analyzed field's name and other things, but the field-id would remain. `source-metadata->fields` in add_implicit_clauses will add a field reference if it finds a field id. Before, the following was returned:

```clojure
({:description "The date and time an order was submitted."
  :semantic_type :type/CreationTimestamp :table_id 2 :coercion_strategy nil :unit :month :name "CREATED_AT"
  :settings nil :source :breakout :field_ref [:field 4 {:temporal-unit :month}] :effective_type :type/DateTime
  :parent_id nil :id 4 :visibility_type :normal :display_name "Created At"
  :fingerprint {:global {:distinct-count 9998 :nil% 0.0}
                :type {:type/DateTime {:earliest "2016-04-30T18:56:13.352Z" :latest "2020-04-19T14:07:15.657Z"}}}
  :base_type :type/DateTime}
 {:description "The raw, pre-tax cost of the order. Note that this might be different in the future from the product price due to promotions, credits, etc."
  :semantic_type :type/Quantity :table_id 2 :coercion_strategy nil :name "distinct case"
  :settings nil :source :aggregation :field_ref [:aggregation 0] :effective_type :type/Float
  :parent_id nil :id 6 :visibility_type :normal :display_name "distinct case"
  :fingerprint {:global {:distinct-count 340 :nil% 0.0}
                :type {:type/Number {:min 15.691943673970439 :q1 49.74894519060184 :q3 105.42965746993103
                                     :max 148.22900526552291 :sd 32.53705013056317 :avg 77.01295465356547}}}
  :base_type :type/BigInteger})
```

The important bit above is that the metadata for the "distinct case" has the correct field_ref but also has a field id. After, we now return the following metadata:

```clojure
({:description "The date and time an order was submitted."
  :semantic_type :type/CreationTimestamp :table_id 2 :coercion_strategy nil :unit :month :name "CREATED_AT"
  :settings nil :source :fields :field_ref [:field 4 {:temporal-unit :default}] :effective_type :type/DateTime
  :parent_id nil :id 4 :visibility_type :normal :display_name "Created At"
  :fingerprint {:global {:distinct-count 9998 :nil% 0.0}
                :type {:type/DateTime {:earliest "2016-04-30T18:56:13.352Z" :latest "2020-04-19T14:07:15.657Z"}}}
  :base_type :type/DateTime}
 {:base_type :type/BigInteger :effective_type :type/Float :coercion_strategy nil :semantic_type :type/Quantity
  :name "distinct case" :display_name "distinct case" :source :fields
  :field_ref [:field "distinct case" {:base-type :type/BigInteger}]}
 {:semantic_type :type/Quantity :coercion_strategy nil :name "cc" :expression_name "cc" :source :fields
  :field_ref [:expression "cc"] :effective_type :type/Float :display_name "cc" :base_type :type/Float})
```

* Add tests looking for full field analysis without id
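A minimal sketch of the "only keep the type information" idea described above; `type-info-only` is an illustrative name, not the actual inference code:

```clojure
(defn type-info-only
  "Strip a field's metadata down to its type information so that no :id (or
  other field-specific keys) leaks into the inferred metadata of an expression
  such as :coalesce or :case."
  [field-metadata]
  (select-keys field-metadata [:base_type :effective_type :coercion_strategy :semantic_type]))

(type-info-only {:id 3 :name "USER_ID" :table_id 2
                 :base_type :type/Integer :effective_type :type/Integer
                 :coercion_strategy nil :semantic_type :type/FK})
;; => {:base_type :type/Integer, :effective_type :type/Integer,
;;     :coercion_strategy nil, :semantic_type :type/FK}
;; With no :id present, add-implicit-clauses won't emit a [:field 3 nil] clause
;; for the expression.
```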
- Dec 06, 2021
- Dec 03, 2021
Jeff Evans authored
* Enable Postgres driver to use secrets for configuring SSL parameters

Adding secret related properties for SSL options to Postgres driver
Move `conn-props->secret-props-by-name` to secret.clj since it's needed directly there, too
Add `us-east-2-bundle.pem` to `test-resources/certificates` for testing Postgres with SSL connectivity to our RDS instance (and a README.md explaining how it differs from the existing `ssl` directory)
Updating `value->file!` to have better logic for building the error message when the existing file is not found (for better UX from the Database admin page)
Updating frontend to support the `visible-if` value being an array, in which case any value may match the current details value, in order for that field to be visible
Adding secret related properties for SSL options to Postgres driver
Updating CircleCI config.yml to refer to the RDS instance when running Postgres SSL test
Implement server side expansion of transitive visible-if dependency chain (see the sketch below)
Update shouldShowEngineProvidedField on client to consider multiple key/value pairs in the visible-if map, and only show the field if ALL are true
Adding new test to confirm transitive visible-if resolution, and cycle detection
Add Cypress test for SSL field visibility
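A minimal sketch of what transitive visible-if expansion with cycle detection could look like; the property names and the assumption that each visible-if map points at a single upstream property are illustrative, not the actual driver code:

```clojure
(def props-by-name
  {"ssl"                 {}
   "ssl-use-client-auth" {:visible-if {"ssl" true}}
   "ssl-client-cert"     {:visible-if {"ssl-use-client-auth" true}}})

(defn expand-visible-if
  "Merge the visible-if maps of `prop-name` and everything it transitively
  depends on, throwing if a cycle is detected."
  [props-by-name prop-name]
  (loop [acc {}, current prop-name, seen #{}]
    (if (contains? seen current)
      (throw (ex-info "Cycle detected in visible-if chain" {:property current}))
      (let [visible-if (get-in props-by-name [current :visible-if])
            acc        (merge visible-if acc)
            ;; follow the (single, for this sketch) upstream property, if any
            next-prop  (first (keys visible-if))]
        (if (contains? props-by-name next-prop)
          (recur acc next-prop (conj seen current))
          acc)))))

(expand-visible-if props-by-name "ssl-client-cert")
;; => {"ssl-use-client-auth" true, "ssl" true}
```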
Noah Moss authored
Noah Moss authored
Noah Moss authored
Noah Moss authored
- Dec 02, 2021
- Dec 01, 2021
Noah Moss authored
- Nov 30, 2021
Jeff Evans authored
* Limit autocomplete results for tables and fields

Add limit of 50 total fields or tables returned from the autocomplete_suggestions API, which can be a mixture of tables and/or fields
Adding test that confirms results are limited properly
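A tiny sketch of that cap; the limit of 50 comes from the commit message, while the function and the shape of the suggestions are illustrative assumptions:

```clojure
(def autocomplete-limit 50)

(defn limit-suggestions
  "Combine table and field suggestions and cap the total returned."
  [tables fields]
  (take autocomplete-limit
        (concat (map #(assoc % :type :table) tables)
                (map #(assoc % :type :field) fields))))
```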
- Nov 28, 2021
Howon Lee authored
BE for progress bar for static viz; Sasha put in the FE bits a bit ago.
- Nov 23, 2021
Noah Moss authored
Cam Saul authored
New dashboard query endpoints and consolidate Dashboard API/Public/Embed parameter resolution code (#18994)

* Code cleanup
* Make the linters happy
* Add pivot version of the new endpoints
* implement usage of the new dashboard card query endpoint (#19012)
* add new endpoints to services.js
* replace CardApi.query with the new endpoint + pass dashboardId
* add parameter id to parameter object found on query body
* run dashcards using card query endpoint when they're new/unsaved on dashboard
* Update endpoint references in e2e tests
* Remove count check from e2e test. We can make sure a double-mapped parameter filter shows results from both fields, but unfortunately with the new endpoint, the results don't seem to work anymore. I think that's OK? Maybe?
* skip corrupted filters test: the query endpoint now results in a 500 error caused by a schema mismatch
* fix a few e2e intercepted request mismatches
* unskip filter corruption test and remove the part that no longer works

Co-authored-by: Dalton <daltojohnso@users.noreply.github.com>
- Nov 22, 2021
Noah Moss authored