This project is mirrored from https://github.com/metabase/metabase.
- Aug 04, 2022
-
-
metamben authored
Breaking date values into parts should happen in the time zone the user expects, and the construction of the truncated date should happen in that same time zone. This is especially important when the offset between the expected time zone and UTC is not a whole number of hours and the truncation resolution is the hour. (See #11149).
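For illustration, a minimal java.time sketch (not the code from this change) of the failure mode: Asia/Kolkata sits at UTC+05:30, so truncating to the hour in UTC and converting afterwards lands on a half-hour boundary instead of the top of the user's hour.

```clojure
(import '[java.time Instant ZoneId]
        '[java.time.temporal ChronoUnit])

(def instant (Instant/parse "2022-08-04T10:45:00Z"))
(def kolkata (ZoneId/of "Asia/Kolkata"))

;; wrong: truncate in UTC, then shift into the user's zone
(-> (.atZone instant (ZoneId/of "UTC"))
    (.truncatedTo ChronoUnit/HOURS)
    (.withZoneSameInstant kolkata))
;; => 2022-08-04T15:30+05:30[Asia/Kolkata]

;; right: shift into the user's zone first, then truncate
(-> (.atZone instant kolkata)
    (.truncatedTo ChronoUnit/HOURS))
;; => 2022-08-04T16:00+05:30[Asia/Kolkata]
```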
-
Nick Fitzpatrick authored
-
metamben authored
-
metamben authored
-
Ryan Laurie authored
* Dashboard Card String Filters are now case-insensitive

This PR is a draft because while it solves the problem of string filters being case sensitive, it doesn't necessarily do it in the best way. This isn't necessarily a bug, but it seems that there is no way for the frontend to set the :case-sensitive true/false option anyway.

For the purposes of this initial attempt at a solution, I have modified the endpoint `POST "/:dashboard-id/dashcard/:dashcard-id/card/:card-id/query"` to automatically include an option map containing :case-sensitive false. The machinery to take this option into consideration already exists with the default `:sql` `->honeysql` function in the `metabase.driver.sql.query-processor` namespace. See the `like-clause` private function in particular.

Since the query processor is a bit opaque to me at present, I was unable to figure out if there is a proper way that the frontend could send an options map (or key-value pair) all the way through the qp middleware to the query building stage. I discovered that if you conj a map `{:case-sensitive false}` onto the output of the `to-clause` function in `metabase.driver.common.parameters.operators`, you get the desired case-insensitive behavior. So, I modified the to-clause function in this PR to appropriately conj an options map if one exists (see the sketch below).

What I'd like to know:
- is there a super-simple way to pass an option in already that I just missed? (e.g. I thought perhaps in the `[:field 13 nil]` that `nil` could be an options map, but I couldn't get that to work for me)
- is there a middleware approach that I should consider?
- any other options to appropriately handle this?

* Revert the endpoint. If the frontend sends an options map on an options key inside the parameter, this endpoint will pass that on, so no change is needed.
* include parameter options in datasetQuery

Co-authored-by: Adam James <adam.vermeer2@gmail.com>
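A hedged sketch of the conj-ing idea described above (illustrative only; the PR's actual change is inside `to-clause` in `metabase.driver.common.parameters.operators`): MBQL string-filter clauses such as `:starts-with` and `:contains` accept a trailing options map, so appending one flips case sensitivity.

```clojure
;; Illustrative helper, not the PR's exact code: append the options map
;; to a generated string-filter clause.
(defn with-case-insensitivity
  [clause]
  (conj clause {:case-sensitive false}))

(with-case-insensitivity [:starts-with [:field 13 nil] "foo"])
;; => [:starts-with [:field 13 nil] "foo" {:case-sensitive false}]
```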
-
Cam Saul authored
* Fix SSH tunnel leaks when testing connections
* Appease kondo =(
* Add some more NOTES
-
Braden Shepherdson authored
These fields hold JSON, which can contain field IDs in a few places. Particularly nasty is the `:column_settings` subfield, which is a map whose *keys* are JSON strings with field IDs.
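For a concrete sense of the shape involved, a hedged sketch (field ID and settings invented for illustration) of a `:column_settings` map whose key is itself a JSON string embedding a field ID:

```clojure
;; The map key below is a JSON-encoded field ref serialized to a string, so
;; the field ID 42 is buried inside a string and a naive walk of the data
;; structure never sees it as an ID.
{:column_settings
 {"[\"ref\",[\"field\",42,null]]" {:column_title "Total"}}}
```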
-
metamben authored
See https://github.com/metabase/metabase/issues/22867 for context.
-
Natalie authored
-
Jeff Bruemmer authored
-
Alexander Polyankin authored
-
Aleksandr Lesnenko authored
-
dpsutton authored
If you click "view sql" on a cached model you see the substituted query rather than the "real" query. The query based on cache: ```sql SELECT "source"."login" AS "login", "source"."count" AS "count" FROM (select "login", "count" from "metabase_cache_e449f_19"."model_4112_issue_assi") "source" LIMIT 1048575 ``` The real underlying query: ```sql SELECT "source"."login" AS "login", "source"."count" AS "count" FROM (SELECT "github_raw"."issue_events__issue__assignees"."login" AS "login", count(*) AS "count" FROM "github_raw"."issue_events__issue__assignees" GROUP BY "github_raw"."issue_events__issue__assignees"."login" ORDER BY "count" DESC, "github_raw"."issue_events__issue__assignees"."login" ASC) "source" LIMIT 1048575 ``` If you click the "convert to sql question" you will end up with a sql question that is not based on the original question but the Metabase managed table.
-
Ngoc Khuat authored
-
Alexander Polyankin authored
-
Dalton authored
-
Dalton authored
-
Dalton authored
* Update hasPermissionsToMap to check for existence of dataset_query
* Add check to mapping-options fn, too
* Add repro for #24536
* Uncomment and enable the repro

Co-authored-by: Nemanja <31325167+nemanjaglumac@users.noreply.github.com>
-
Alexander Polyankin authored
-
- Aug 03, 2022
-
-
Natalie authored
-
Noah Moss authored
* set CookieSpecs/STANDARD on Apache HttpClient for Snowplow
* add a comment in the code pointing to the PR
* set cookie spec via RequestConfig instead
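A hedged sketch, in Clojure interop, of setting the cookie spec through `RequestConfig` on an Apache HttpClient 4.x client (illustrative wiring, not the Snowplow emitter's exact code):

```clojure
(import '[org.apache.http.client.config CookieSpecs RequestConfig]
        '[org.apache.http.impl.client HttpClients])

;; build a RequestConfig that uses the RFC 6265 "standard" cookie spec
(def request-config
  (-> (RequestConfig/custom)
      (.setCookieSpec CookieSpecs/STANDARD)
      (.build)))

;; apply it as the client's default request config
(def http-client
  (-> (HttpClients/custom)
      (.setDefaultRequestConfig request-config)
      (.build)))
```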
-
Diogo Mendes authored
-
Jeff Bruemmer authored
* notes from Luiz
* refresh cache
* typo
-
Howon Lee authored
* rewrite here...
* no bq
* thinking
* conjunction of alias thing...
* make sure we dont wanna see it
* nit
* multi-arity func
* fiddly bit on test
* add bit to see order by works right
* no default breakout true
-
Anton Kulyk authored
-
Alexander Polyankin authored
-
Alexander Polyankin authored
-
Jeff Bruemmer authored
-
Braden Shepherdson authored
This PR handles the other JSON-encoded MBQL snippets I was able to find. These snippets contain `Table` and `Field` IDs, and so are not portable. These fields are expanded during serialization and the IDs replaced with portable references, then converted back in deserialization (see the sketch after this list).

Note that the referenced field must already be loaded before it has a valid ID. `serdes-dependencies` defines this order, therefore each entity depends on those tables and fields referenced in its MBQL fragments.

The complete set of fields I found to convert:
- `Metric.definition`
- `Segment.definition`
- `DashboardCard.parameter_mappings`
- `Card.parameter_mappings`
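A hedged sketch of the expansion idea using hypothetical helpers (Metabase's real serdes code differs): walk an MBQL fragment and swap opaque numeric field IDs for portable, name-based references.

```clojure
(require '[clojure.walk :as walk])

(defn- field-id->portable-ref
  "Hypothetical lookup: numeric field ID -> name-based reference."
  [id]
  (get {42 ["my_db" nil "orders" "total"]} id))

(defn export-mbql
  "Replace [:field <int> opts] clauses with portable name-based refs."
  [mbql]
  (walk/postwalk
   (fn [form]
     (if (and (vector? form)
              (= :field (first form))
              (integer? (second form)))
       (assoc form 1 (field-id->portable-ref (second form)))
       form))
   mbql))

(export-mbql [:= [:field 42 nil] 100])
;; => [:= [:field ["my_db" nil "orders" "total"] nil] 100]
```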
-
Noah Moss authored
* change implementation of `split-on-tags` to avoid using lookaround
* fix regex
-
dpsutton authored
-
dpsutton authored
* Remove deprecated friend library

friend has two functions we used: bcrypt and bcrypt-verify. Easy to lift them into our own namespace with attribution:
- uses simple interop on org.mindrot.jbcrypt.BCrypt to achieve these
- also brings in other stuff we don't need

```
com.cemerick/friend 0.2.3
X org.mindrot/jbcrypt 0.3m :use-top <- all we care about
X org.clojure/core.cache 0.6.3 :superseded
X org.clojure/data.priority-map 0.0.2 :parent-omitted
. org.openid4java/openid4java-nodeps 0.9.6
X commons-logging/commons-logging 1.1.1 :older-version
. net.jcip/jcip-annotations 1.0
. com.google.inject/guice 2.0
. aopalliance/aopalliance 1.0
```

And we already declare a dependency on 0.4 of this lib:

```
org.mindrot/jbcrypt 0.4
```

This means we can remove openid4java, google.inject/guice, aopalliance, etc. and just keep using the same `BCrypt` java class we have been using this whole time. Behavior and classfiles are identical, so very low risk.

Want to call out a use of:

```clojure
(when-not api/*is-superuser?*
  (api/checkp (u.password/bcrypt-verify (str (:password_salt user) old_password)
                                        (:password user))
    "old_password"
    (tru "Invalid password")))
```

This has the same signature as an existing function, `u.password/verify-password`:

```clojure
(defn verify-password
  "Verify if a given unhashed password + salt matches the supplied hashed-password.
  Returns `true` if matched, `false` otherwise."
  ^Boolean [password salt hashed-password]
  ;; we wrap the friend/bcrypt-verify with this function specifically to avoid unintended exceptions getting out
  (boolean (u/ignore-exceptions (bcrypt-verify (str salt password) hashed-password))))
```

I did not replace it in this PR so that the diff is essentially `creds/<fn>` -> `u.password/<fn>` and it is very easy to structurally see what is going on. But it totally makes sense to clean up the usages of these in another pass.

* sort ns
* simple tests
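A hedged sketch of what the lifted functions look like as plain interop on `org.mindrot.jbcrypt.BCrypt` (names assumed; the real versions live in our own namespace with attribution to friend):

```clojure
(import 'org.mindrot.jbcrypt.BCrypt)

(defn bcrypt
  "Hash the given string with bcrypt. Lifted from com.cemerick/friend."
  [s]
  (BCrypt/hashpw s (BCrypt/gensalt)))

(defn bcrypt-verify
  "Check a plaintext string against a bcrypt hash. Lifted from friend."
  [s hashed]
  (BCrypt/checkpw s hashed))
```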
-
Diogo Mendes authored
-
Nemanja Glumac authored
-
Nemanja Glumac authored
* Update the syntax
* Rename filters
* Add repro for #24500
* Skip the repro
* Rename file
* Address comment and fix the syntax
* Improve the test title
-
Jeff Bruemmer authored
-
Alexander Polyankin authored
-
- Aug 02, 2022
-
-
dpsutton authored
* Handle flakiness with geojson java.net.UnknownHostException errors

In CI seems like we are getting errant errors:

```clojure
geojson.clj:62
It validates URLs and files appropriately http://0xc0000200
expected: (valid? geojson)
  actual: #error {
 :cause "Invalid IP address literal: 0xc0000200"
 :via [{:type clojure.lang.ExceptionInfo
        :message "Invalid GeoJSON file location: must either start with http:// or https:// or be a relative path to a file on the classpath. URLs referring to hosts that supply internal hosting metadata are prohibited."
        :data {:status-code 400, :url "http://0xc0000200"}
        :at [metabase.api.geojson$valid_url_QMARK_ invokeStatic "geojson.clj" 62]}
       {:type java.net.UnknownHostException
        :message "0xc0000200"
        :at [java.net.InetAddress getAllByName "InetAddress.java" 1340]}
       {:type java.lang.IllegalArgumentException
        :message "Invalid IP address literal: 0xc0000200"
        :at [sun.net.util.IPAddressUtil validateNumericFormatV4 "IPAddressUtil.java" 150]}]
```

Not clear if this change has a hope of fixing it: if it doesn't resolve once, it's possible it is cached somewhere in the network stack, or it won't resolve if you ask again. But gonna give it a shot.

Set the property `"networkaddress.cache.negative.ttl"` to `"0"`:

> networkaddress.cache.negative.ttl (default: 10)
> Indicates the caching policy for un-successful name lookups from the name service. The value is specified as an integer to indicate the number of seconds to cache the failure for un-successful lookups.
> A value of 0 indicates "never cache". A value of -1 indicates "cache forever".

From https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/net/InetAddress.html in the hopes that we can try multiple times. Restores the original value after the test completes so we don't inadvertently change behavior elsewhere.

If we get an error of java.net.UnknownHostException we try again if we have attempts remaining. If we get a boolean it means the ip resolution worked so we can rely on the response (checking if it resolves locally or not).

* add a delay
* comment out test
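A hedged sketch (hypothetical helpers, not the test's exact code) of the two moving parts: zeroing the negative-DNS-cache TTL around a body, restoring it afterwards, and retrying on `java.net.UnknownHostException`:

```clojure
(import 'java.security.Security)

(defn with-no-negative-dns-cache*
  "Run thunk with negative DNS lookups uncached, restoring the old policy."
  [thunk]
  (let [prop "networkaddress.cache.negative.ttl"
        old  (Security/getProperty prop)]
    (Security/setProperty prop "0")
    (try
      (thunk)
      (finally
        ;; restore the original policy (the JDK default is "10")
        (Security/setProperty prop (or old "10"))))))

(defn try-resolving
  "Call f, retrying on UnknownHostException up to `attempts` times."
  [f attempts]
  (try
    (f)
    (catch java.net.UnknownHostException e
      (if (pos? (dec attempts))
        (try-resolving f (dec attempts))
        (throw e)))))
```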
-
Ryan Laurie authored
-
dpsutton authored
* bump snowflake
* Handle Types/TIMESTAMP_WITH_TIMEZONE for snowflake

Snowflake used to return columns of this type as `Types/TIMESTAMP` = 93. The driver is registered with the legacy stuff

```clojure
(driver/register! :snowflake, :parent #{:sql-jdbc ::sql-jdbc.legacy/use-legacy-classes-for-read-and-set})
```

causing it to hit the following codepath:

```clojure
(defmethod sql-jdbc.execute/read-column-thunk [::use-legacy-classes-for-read-and-set Types/TIMESTAMP]
  [_ ^ResultSet rs _ ^Integer i]
  (fn []
    (when-let [s (.getString rs i)]
      (let [t (u.date/parse s)]
        (log/tracef "(.getString rs i) [TIMESTAMP] -> %s -> %s" (pr-str s) (pr-str t))
        t))))
```

But snowflake now reports the column as `Types/TIMESTAMP_WITH_TIMEZONE` = 2014 (in https://github.com/snowflakedb/snowflake-jdbc/pull/934/files), so we no longer hit this string based path for the timestamp with timezone pathway. Instead it hits

```clojure
(defn- get-object-of-class-thunk [^ResultSet rs, ^long i, ^Class klass]
  ^{:name (format "(.getObject rs %d %s)" i (.getCanonicalName klass))}
  (fn []
    (.getObject rs i klass)))

,,,

(defmethod read-column-thunk [:sql-jdbc Types/TIMESTAMP_WITH_TIMEZONE]
  [_ rs _ i]
  (get-object-of-class-thunk rs i java.time.OffsetDateTime))
```

And `(.getObject ...)` blows up with an unsupported exception:

```java
// @Override
public <T> T getObject(int columnIndex, Class<T> type) throws SQLException {
  logger.debug("public <T> T getObject(int columnIndex,Class<T> type)", false);
  throw new SnowflakeLoggedFeatureNotSupportedException(session);
}
```

There seem to be some `getTimestamp` methods on the `SnowflakeBaseResultSet` that we could call, but for now I'm just grabbing the string and we'll parse on our side.

One style note: it is not quite clear to me if this should follow the pattern established in the `snowflake.clj` driver of setting `defmethod sql-jdbc.execute/read-column-thunk [:snowflake Types/TIMESTAMP_WITH_TIMEZONE]` or if it should follow the legacy pattern of `defmethod sql-jdbc.execute/read-column-thunk [::use-legacy-classes-for-read-and-set Types/TIMESTAMP]`. It seems like almost either would be fine; it just depends on whether other legacy drivers might like this fix, which seems possible.
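A hedged sketch of the "grab the string and parse on our side" fix, mirroring the legacy TIMESTAMP path quoted above (assumes the driver namespace's existing requires; the merged method may differ in detail):

```clojure
;; read TIMESTAMP_WITH_TIMEZONE columns as strings and parse them ourselves,
;; since the Snowflake JDBC driver's getObject with a class throws
;; SnowflakeLoggedFeatureNotSupportedException
(defmethod sql-jdbc.execute/read-column-thunk [:snowflake Types/TIMESTAMP_WITH_TIMEZONE]
  [_ ^ResultSet rs _ ^Integer i]
  (fn []
    (when-let [s (.getString rs i)]
      (u.date/parse s))))
```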
-