JDBC improvements and convenience method suggestions

Lukas Eder lukas.eder at gmail.com
Fri Jul 13 07:50:26 UTC 2018

Mark, Vladimir,

Thanks already for your feedback. Some comments inline. I've copied all of
your mails below, sequentially.

*On Thu, 12 Jul 2018 at 20:26, Mark Rotteveel
<mark at lawinegevaar.nl> wrote:*

> > 1. Add java.sql.Freeable and let Array, Blob, Clob, and SQLXML extend it
> I'm slightly hesitant about introducing this new interface. Making those
> interfaces directly implement `AutoCloseable` with default methods
> calling `free()` may lead to a little bit of code duplication, but
> removes the need for an additional interface whose only real function is
> to perform as a bridge between `close()` and `free()`.

Sure, I have no real opinion on either approach.
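
As a toy sketch of that `AutoCloseable`-with-default-methods variant (the
types below are stand-ins for illustration, not the real java.sql
interfaces):

```java
import java.sql.SQLException;

// Toy sketch: instead of a new java.sql.Freeable interface, let Blob, Clob,
// etc. extend AutoCloseable directly, with a default close() bridging to
// free(). FreeableResource and FakeBlob are stand-ins, not JDBC types.
interface FreeableResource extends AutoCloseable {
    void free() throws SQLException;

    @Override
    default void close() throws SQLException {
        free(); // close() becomes an alias for free()
    }
}

public class FreeableDemo {
    // Minimal fake "Blob" so try-with-resources can be demonstrated
    static class FakeBlob implements FreeableResource {
        boolean freed = false;
        @Override public void free() { freed = true; }
    }

    public static void main(String[] args) throws Exception {
        FakeBlob blob = new FakeBlob();
        try (blob) {
            // use the blob
        }
        System.out.println(blob.freed); // freed via the default close()
    }
}
```

The small duplication (one default method per interface) buys us
try-with-resources support without a new public type.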

> > 2. Add PreparedStatement.set(Object...)
> I think that this should require that the provided parameters should
> include **all** parameters. So if there are three parameters, then
> `set(..)` must provide 3 parameters. If you provide 2 parameters, that
> should result in an SQLException.

Admittedly, while this was my intention, I'm undecided on whether this
should really be the case. I can definitely see setting all values and then
overwriting 1-2 values in a subsequent call, so why not set N-1 values
and then "overwrite" the Nth value?

But I could definitely live with the specification of having to provide all
parameters.

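To make the all-or-nothing contract concrete, here is a hypothetical helper
sketching what such a `set(Object...)` could do (the name `setAll` is
illustrative; the proxy-based fake in `main` exists only to demonstrate the
contract without a database):

```java
import java.lang.reflect.Proxy;
import java.sql.ParameterMetaData;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper sketching the proposed PreparedStatement.set(Object...):
// bind every value positionally and reject calls that do not cover all
// parameter markers. setAll is not part of JDBC.
public class SetAllDemo {

    static void setAll(PreparedStatement ps, Object... values) throws SQLException {
        int expected = ps.getParameterMetaData().getParameterCount();
        if (values.length != expected)
            throw new SQLException(
                "Expected " + expected + " bind values, got " + values.length);
        for (int i = 0; i < values.length; i++)
            ps.setObject(i + 1, values[i]); // JDBC parameters are 1-based
    }

    public static void main(String[] args) throws Exception {
        // Stub a 3-parameter statement with dynamic proxies
        List<Object> bound = new ArrayList<>();
        ParameterMetaData md = (ParameterMetaData) Proxy.newProxyInstance(
            SetAllDemo.class.getClassLoader(),
            new Class<?>[] { ParameterMetaData.class },
            (p, m, a) -> m.getName().equals("getParameterCount") ? 3 : null);
        PreparedStatement ps = (PreparedStatement) Proxy.newProxyInstance(
            SetAllDemo.class.getClassLoader(),
            new Class<?>[] { PreparedStatement.class },
            (p, m, a) -> {
                if (m.getName().equals("getParameterMetaData")) return md;
                if (m.getName().equals("setObject")) bound.add(a[1]);
                return null;
            });

        setAll(ps, 1, 2, 3);
        System.out.println(bound); // [1, 2, 3]
        try {
            setAll(ps, 1, 2);      // too few values
        } catch (SQLException e) {
            System.out.println("rejected");
        }
    }
}
```

Note that the check relies on ParameterMetaData, which not all drivers
implement cheaply, so a real specification would have to weigh that cost.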
> > 4. Add Result.get():Object[]
> Why the choice of an array and not, for example, a List?

I don't have an opinion on the concrete type and I thought it would be too
early to discuss it. I think we can agree that there will be a good choice,
and I also think that it will not be Object[]. See it as a placeholder for
now.

> > 5. Let ResultSet extends Iterable<Object[]>
> >
> https://github.com/lukaseder/openjdk/commit/7daefde1134099baa1923c3f120789069c5f6917
> I'm not sure I like that UncheckedSQLException, especially not as a
> nested class.

Don't worry about the nested classes. I showed you a draft of what I had in
mind, without spending too much time on correct location, appropriate
naming, or documentation of classes.

Fact is, however, *if* ResultSet should communicate with Iterable /
Iterator / Stream, then unchecked exceptions will be necessary.
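
To illustrate why: an `Iterator` over a `ResultSet` would have to look
roughly like the sketch below, where a plain `RuntimeException` stands in
for the draft's `UncheckedSQLException` (the proxy-based fake in `main`
only demonstrates the contract without a database):

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Sketch of the Iterator<Object[]> that "ResultSet extends Iterable<Object[]>"
// would imply. Iterator methods cannot throw checked exceptions, so any
// SQLException must be wrapped unchecked.
public class RowIteratorDemo {

    static final class RowIterator implements Iterator<Object[]> {
        private final ResultSet rs;
        private Boolean hasNext; // caches the last rs.next() result

        RowIterator(ResultSet rs) { this.rs = rs; }

        @Override public boolean hasNext() {
            try {
                if (hasNext == null)
                    hasNext = rs.next();
                return hasNext;
            } catch (SQLException e) {
                throw new RuntimeException(e); // stand-in for UncheckedSQLException
            }
        }

        @Override public Object[] next() {
            if (!hasNext())
                throw new NoSuchElementException();
            hasNext = null; // consume the cached rs.next() result
            try {
                int cols = rs.getMetaData().getColumnCount();
                Object[] row = new Object[cols];
                for (int i = 0; i < cols; i++)
                    row[i] = rs.getObject(i + 1);
                return row;
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) {
        // Fake a two-row, two-column ResultSet with dynamic proxies
        List<Object[]> rows = List.of(new Object[] {1, 2}, new Object[] {3, 4});
        int[] cursor = {-1};
        ResultSetMetaData md = (ResultSetMetaData) Proxy.newProxyInstance(
            RowIteratorDemo.class.getClassLoader(),
            new Class<?>[] {ResultSetMetaData.class},
            (p, m, a) -> m.getName().equals("getColumnCount") ? 2 : null);
        ResultSet rs = (ResultSet) Proxy.newProxyInstance(
            RowIteratorDemo.class.getClassLoader(),
            new Class<?>[] {ResultSet.class},
            (p, m, a) -> switch (m.getName()) {
                case "next" -> ++cursor[0] < rows.size();
                case "getMetaData" -> md;
                case "getObject" -> rows.get(cursor[0])[(Integer) a[0] - 1];
                default -> null;
            });

        new RowIterator(rs).forEachRemaining(r -> System.out.println(Arrays.toString(r)));
    }
}
```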

> A `ResultSet` is not a `List`, nor a `Collection` (especially not a
> `TYPE_FORWARD_ONLY` one); we should not try to bolt such behavior on, as
> that would make `ResultSet` rather hard to implement correctly and
> efficiently, especially given some of the other requirements of
> `Collection`, for example `equals`, `hashCode`, `contains`, etc.

I don't see why a ResultSet isn't a List *conceptually*, although as I
said, I would probably also refrain from exposing that type relationship so
bluntly, as it would be abused. However, making jOOQ's
org.jooq.Result<Record> extend List<Record> was a design decision that I've
never regretted and which has added so much value to the Result type, if
the assumption is that results are fetched eagerly most of the time. In
jOOQ, there are two types for this:

- Eagerly fetched Result<Record> (the ResultSet is already closed when it is
returned)
- Lazily fetched Cursor<Record> (the ResultSet is still open while it is
being consumed)

Both of these types can be streamed and iterated very conveniently, with
slightly different performance characteristics. Result also allows random
access to individual rows and for adding new rows to it, e.g. when used as
an input table for the SQL VALUES() clause:

    SELECT * FROM (VALUES (1, 2), (3, 4)) t(a, b)

The above 2x2 table could have been a Result that has been constructed by
the user. But this discussion is probably too far-fetched; I wanted to
focus on the immediately useful parts first.

> > Makes me wish for a for-with-resources as well that iterates over
> > (Iterable<T> & AutoCloseable) and closes the object after (un)successful
> > iteration :-)
> That might indicate that instead of a `Iterator`, we should add
> `stream()`, as a `Stream` does implement `Closeable` and would achieve
> this as well.

Stream does not extend Iterable, so the goals would be different. Brian
Goetz has shared some insight on why that is.

I can definitely see the utility of a ResultSet.stream():Stream<Object[]>
method, though. I don't see how this conflicts with ResultSet extends
Iterable<Object[]>, except, perhaps, in terms of being cautious with
respect to forward compatibility - see again Brian Goetz's comments.

> > 7. Add Connection.transaction() overloads for functional transaction
> > usage
> >
> https://github.com/lukaseder/openjdk/commit/12c9591ca9066e676001fd6e67e18b7d444d0a32
> I don't really like the idea of nested classes in the API.

Don't worry about those. Again, this was just the path of least resistance
to get something that compiles and can be discussed.

> I don't really see the need for `TransactionRunnable` / `SQLRunnable` and
> `TransactionalSupplier` / `SQLSupplier`. The only advantage seems to be
> that you can use () -> .. instead of having an explicit lambda argument.

This is about call site convenience. Having method overloads accepting
void-compatible and value-compatible lambdas is mere convenience. The
void-compatible overload could be stripped, causing only a minor annoyance.
I've found those overloads extremely useful in jOOQ.

The overloads accepting no-args lambdas are also mere convenience for cases
where capturing the Connection from some outer scope is possible. Having
two Connection references in scope is a bit tedious, especially because the
lambda argument must not shadow the outer scope's variable name. However, the
most correct and useful overload is the one accepting a value-compatible,
single-arg (Connection connection) -> T lambda, as it is the only one
accepting pure functions.

So, perhaps the overloads are just distracting from the actual feature
suggestion.
> This default implementation assumes savepoint support, but savepoints
> are an optional JDBC feature, we should not rely on optional features in
> default implementations. I would suggest that a default implementation
> should not use savepoints, but instead commit a current transaction (if
> not in auto commit), and commit/rollback at the end.

Sure, that could definitely be supported as well, and then there could be
flags to enable / disable one or the other behaviour. I would still default
to supporting nested transactions and risk throwing exceptions once
transactions actually *are* nested (if the driver doesn't support
savepoints), because nesting isn't the standard use-case.
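
A rough sketch of such a default implementation, with savepoint-based
nesting as the default behaviour (the `transaction` method and
`SQLFunction` type are hypothetical, and the proxied `Connection` in
`main` merely records the call flow):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.util.ArrayList;
import java.util.List;

// Sketch of a transaction() default implementation along the lines discussed:
// top-level calls use commit/rollback, nested calls (auto-commit already off)
// use savepoints and will simply throw on drivers without savepoint support.
public class TransactionDemo {

    @FunctionalInterface
    interface SQLFunction<T> {
        T apply(Connection c) throws SQLException;
    }

    static <T> T transaction(Connection c, SQLFunction<T> f) throws SQLException {
        if (!c.getAutoCommit()) {
            // Nested transaction: delegate to a savepoint
            Savepoint sp = c.setSavepoint();
            try {
                T result = f.apply(c);
                c.releaseSavepoint(sp);
                return result;
            } catch (SQLException | RuntimeException e) {
                c.rollback(sp);
                throw e;
            }
        }
        c.setAutoCommit(false);
        try {
            T result = f.apply(c);
            c.commit();
            return result;
        } catch (SQLException | RuntimeException e) {
            c.rollback();
            throw e;
        } finally {
            c.setAutoCommit(true);
        }
    }

    public static void main(String[] args) throws SQLException {
        // Record the calls made on a fake Connection to show the flow
        List<String> calls = new ArrayList<>();
        boolean[] autoCommit = {true};
        Connection c = (Connection) Proxy.newProxyInstance(
            TransactionDemo.class.getClassLoader(),
            new Class<?>[] {Connection.class},
            (p, m, a) -> {
                calls.add(m.getName());
                if (m.getName().equals("getAutoCommit")) return autoCommit[0];
                if (m.getName().equals("setAutoCommit")) autoCommit[0] = (Boolean) a[0];
                return null;
            });

        int result = transaction(c, outer -> transaction(outer, inner -> 42));
        System.out.println(result + " " + calls);
    }
}
```

A flag-controlled variant could swap the savepoint branch for the
commit-current-transaction behaviour Mark describes.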

*On Thu, 12 Jul 2018 at 22:21, Mark Rotteveel
<mark at lawinegevaar.nl> wrote:*

> On 12-7-2018 17:43, Lukas Eder wrote:
> > 3. Add Connection.executeXYZ() methods
> As Michael Rasmussen pointed out on Twitter in
> https://twitter.com/jmichaelras/status/1017438847309926401, the
> `executeQuery` variants will not work because the result set will be
> closed by closing the statement. Instead I think that it should be
> implemented as
>      /**
>       * Create a statement and execute some SQL on it.
>       */
>      default ResultSet executeQuery(String sql) throws SQLException {
>          Statement s = createStatement();
>          try {
>              s.closeOnCompletion();
>              return s.executeQuery(sql);
>          } catch (SQLException e) {
>              // Safeguard against lack of closeOnCompletion support
>              try {
>                  s.close();
>              } catch (SQLException suppressed) {
>                  e.addSuppressed(suppressed);
>              }
>              throw e;
>          }
>      }
> (+ similar for `executeQuery(String sql, Object... binds)`)

Yes, absolutely. This was an implementation oversight on my side. I had
previously prepared an implementation that would proxy the resulting
ResultSet and close the Statement upon ResultSet.close(). That
implementation somehow got lost.

Note, I had also suggested Connection.execute(String):boolean, which does
not make sense: nothing useful can be done when this method returns true,
given that the statement is closed right away. I will fix these two things.

> However, this could still leak a statement resource if
> `closeOnCompletion` is ignored without actual support. Maybe this is an
> indication that instead it should be a more functional construct like:
> default void executeQuery(String sql, SQLConsumer<ResultSet> rsConsumer)
>         throws SQLException {
>      try (Statement s = createStatement();
>          ResultSet rs = s.executeQuery(sql)) {
>          rsConsumer.consume(rs);
>      }
> }
> But that immediately raises the question whether or not we should
> introduce similar constructs for the executeXXX methods on Statements, etc.

That's definitely a very interesting alternative suggestion worth
exploring, especially since we're already discussing such a functional API
for transactions and data source usage.
My suggestions merely attempt to simplify existing procedural usage, so both
options might be reasonable.
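
For reference, Mark's consumer-style variant can be rendered as a
standalone sketch like this (`SQLConsumer` is his hypothetical functional
interface; the proxies in `main` only demonstrate that both resources get
closed, without a database):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Standalone rendering of the consumer-style suggestion: try-with-resources
// closes both the statement and the result set even if the consumer throws.
public class ConsumerQueryDemo {

    @FunctionalInterface
    interface SQLConsumer<T> {
        void accept(T t) throws SQLException;
    }

    static void executeQuery(Connection c, String sql, SQLConsumer<ResultSet> consumer)
            throws SQLException {
        try (Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(sql)) {
            consumer.accept(rs);
        }
    }

    public static void main(String[] args) throws SQLException {
        // Fake the JDBC objects with proxies and record close() calls
        List<String> closed = new ArrayList<>();
        ResultSet rs = (ResultSet) Proxy.newProxyInstance(
            ConsumerQueryDemo.class.getClassLoader(),
            new Class<?>[] {ResultSet.class},
            (p, m, a) -> {
                if (m.getName().equals("close")) closed.add("resultSet");
                return null;
            });
        Statement s = (Statement) Proxy.newProxyInstance(
            ConsumerQueryDemo.class.getClassLoader(),
            new Class<?>[] {Statement.class},
            (p, m, a) -> {
                if (m.getName().equals("close")) closed.add("statement");
                if (m.getName().equals("executeQuery")) return rs;
                return null;
            });
        Connection c = (Connection) Proxy.newProxyInstance(
            ConsumerQueryDemo.class.getClassLoader(),
            new Class<?>[] {Connection.class},
            (p, m, a) -> m.getName().equals("createStatement") ? s : null);

        executeQuery(c, "SELECT 1", r -> System.out.println("consumed"));
        System.out.println(closed); // result set closes before the statement
    }
}
```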

*On Thu, 12 Jul 2018 at 23:02, Vladimir Sitnikov
<sitnikov.vladimir at gmail.com> wrote:*

> >      "INSERT INTO t VALUES (?, ?, ?)")) {
> >      s.set(1, 2, 3);
> > I think that this should require that the provided parameters should
> > include **all** parameters.
> How should it handle nulls?

Just like PreparedStatement.setObject(int, Object).

Many drivers can derive the type of the null bind value at prepare time
and/or at runtime. If you're using e.g. DB2 or Derby, whose NULL type
recognition is less powerful than e.g. Oracle's, people are already used to
casting bind variables, for example:

    INSERT INTO t VALUES (CAST(? AS VARCHAR(100)))

I agree that some drivers cannot do much magic here due to the restrictions
of the database they connect to, but this is already the case today, and it
is really more of an implementation problem than an API specification
problem. In a perfect world that does not leak abstractions, it should be
totally possible to call

    pstmt.setObject(1, null);

And not worry a second about server side data types.

Of course, an additional PreparedStatement.set(Object[] params, int[]
types) could be added as well, if that's really valuable...
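
A sketch of what that typed variant could look like (the method shape is
hypothetical; null values are routed through `setNull()` with an explicit
`java.sql.Types` code, and the proxied statement in `main` just records how
each parameter was bound):

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;

// Hypothetical set(Object[] values, int[] types) variant: null values are
// bound with an explicit java.sql.Types code, so the driver never has to
// guess the server-side type of a null.
public class TypedBindDemo {

    static void set(PreparedStatement ps, Object[] values, int[] types)
            throws SQLException {
        for (int i = 0; i < values.length; i++) {
            if (values[i] == null)
                ps.setNull(i + 1, types[i]);          // typed null
            else
                ps.setObject(i + 1, values[i], types[i]);
        }
    }

    public static void main(String[] args) throws SQLException {
        // Fake PreparedStatement that records how each parameter was bound
        List<String> calls = new ArrayList<>();
        PreparedStatement ps = (PreparedStatement) Proxy.newProxyInstance(
            TypedBindDemo.class.getClassLoader(),
            new Class<?>[] {PreparedStatement.class},
            (p, m, a) -> {
                if (m.getName().startsWith("set"))
                    calls.add(m.getName() + "(" + a[0] + ", " + a[1] + ")");
                return null;
            });

        set(ps, new Object[] {1, null}, new int[] {Types.INTEGER, Types.VARCHAR});
        System.out.println(calls);
    }
}
```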

> I'm fine if it throws in case null arguments are present, however it would
> limit the use of the feature.
> SQL deals with typed nulls, while Java has no way to tell if null is of
> numeric or text type.
> Consider { call procedure(?) } when there are two overloads (text and
> numeric argument). The type of null enables to pick the proper overload.

Note, I deliberately left out CallableStatements in general, because they
require registering out parameters and actually passing out parameters to
the client. A convenience API is not easy to design for this use-case.

> Lukas> 6. Add SQLInput.getConnection() and SQLOutput.getConnection()
> Makes sense at least for SQLOutput.
> Current SQLOutput has writeArray(Array) method, however it is absolutely
> not clear where to get the Array in the first place from.
> The downside of having SQLOutput.getConnection is it would block
> "streaming" implementation of SQLOutput.

Why would it? OracleSQLOutput (the only implementation I'm aware of) has a
property conn:OracleConnection, so this is a simple property access in that
implementation. The same is true for OracleJdbc2SQLInput.

> I'm not sure what is the value of SQLInput.getConnection().

1. API regularity. I never like it when two dual APIs are not consistent.
2. Access to methods like c.getCatalog(), c.getSchema(), c.getClientInfo(),
c.getTypeMap() (this is required to read nested SQLData), or vendor-specific
extensions.
3. If you want to read vendor specific data types, like Oracle's
TIMESTAMPTZ, the only way I've found to work is to call

So, this is definitely a good thing to have available. And again, it's
already there in Oracle, it just needs to be exposed publicly.

> For instance, the following works in PostgreSQL
> [...]
> And the question is how one could create baz.mood[] array vs
> public."baz.mood"[] one.
> What should one pass to the elementTypeName?
> something like createArrayOf(SQLType elementType, Object[] elements) might
> be a good idea to have proper type qualifiers.
> It looks like createNamedArrayOf(SQLType arrayType, Object[] elements)
> could be useful when DB uses named arrays.

I don't know. The official PostgreSQL JDBC driver is notoriously incomplete
in this area. jOOQ just serialises all sorts of vendor-specific PostgreSQL
types as strings, and reads them back as strings as well. That's the only
way I ever got these things to work on PostgreSQL.

