
scalikejdbc

A tidy SQL-based DB access library for Scala developers. This library naturally wraps JDBC APIs and provides easy-to-use APIs.

accessing JSONB values using scalikejdbc-async

We are evaluating scalikejdbc-async for a new Play project. The new PostgreSQL 9.4 features, jsonb and its indexing, seem very attractive, and so does scalikejdbc-async.

Is there a way to access JSONB values using scalikejdbc-async, and if there is not, how hard would it be to add?

Thank you.
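
A minimal sketch of one interim approach, assuming the driver hands jsonb back as text: cast the column server-side and map it as a plain String (the synchronous API is shown for brevity; scalikejdbc-async mirrors it with Future-returning calls). The docs table and payload column here are hypothetical.

import scalikejdbc._

case class Doc(id: Long, payload: String) // payload carries the raw JSON text

def findDoc(id: Long)(implicit session: DBSession): Option[Doc] =
  sql"select id, payload::text as payload from docs where id = ${id}"
    .map(rs => Doc(rs.long("id"), rs.string("payload")))
    .single.apply()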


Source: (StackOverflow)

IN clauses with SQL interpolation

Can I use IN clauses with ScalikeJDBC's SQL interpolation? e.g.

val ids = Set(1,2,3,5)
sql"""update foo set bar=${bar} where id in ${ids}""".update().apply()

This fails because ids is not interpolated.

sql"""update foo set bar=${bar} where id in (${ids.mkString(",")})""".update().apply()

This also fails because the expression is interpreted as a String, not a list of numbers, e.g. ... where id in ('1,2,3,5')
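
A minimal sketch of what already works on scalikejdbc 2.x, assuming that version: binding a collection inside the interpolation expands it into a comma-separated parameter list, so no mkString is needed.

import scalikejdbc._

val bar = "baz" // hypothetical value
val ids = Set(1, 2, 3, 5)

DB autoCommit { implicit session =>
  // expands to: update foo set bar = ? where id in (?, ?, ?, ?)
  sql"update foo set bar = ${bar} where id in (${ids})".update.apply()
}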


Source: (StackOverflow)


Left JOIN with AND clause in ON with scalikejdbc

So I have this SQL (part of a much larger query):

from Person p left join ForeignCredentials fc on fc.person_id = p.id and fc.type = 'FACEBOOK'

and I'm trying to represent this in scalikejdbc like this:

select.from(Person as p).leftJoin(ForeignCredential as fc).on(fc.`person_id`, p.id)

But I can't figure out how to add the extra condition. The intuitive way would be:

select.from(Person as p).leftJoin(ForeignCredential as fc)
  .on(fc.`person_id`, p.id).and.eq(fc.`type`, "FACEBOOK")

So how do I do it?
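
A minimal sketch of one way to express it, assuming generated accessors personId and `type` on the syntax provider: pass a composite SQLSyntax condition to .on instead of a column pair.

import scalikejdbc._

val p = Person.syntax("p")
val fc = ForeignCredential.syntax("fc")

withSQL {
  select.from(Person as p)
    .leftJoin(ForeignCredential as fc)
    .on(sqls.eq(fc.personId, p.id).and.eq(fc.`type`, "FACEBOOK"))
}.map(Person(p.resultName)).list.apply() // assumes a Person(ResultName)(rs) mapper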


Source: (StackOverflow)

For scalikejdbc how to write QueryDSL with a foreign key constraint

I use scalikejdbc 2.0.1 and Play Framework 2.3. I followed the one-to-many instructions at http://scalikejdbc.org/documentation/one-to-x.html, but I still get an error.

My data model is one Account with many Todos.

The Todo model:

case class Todo(id: Long, value: String, userId: Option[Long] = None, users: Option[Account] = None)

object Todo extends SQLSyntaxSupport[Todo] {
  val todo = syntax("todo")
  override val tableName = "todo"
  private val auto = AutoSession

  def opt(m: ResultName[Todo])(rs: WrappedResultSet) = rs.longOpt(m.id).map(_ => Todo(m)(rs))

  def apply(todo: SyntaxProvider[Todo])(rs: WrappedResultSet): Todo = apply(todo.resultName)(rs)

  def apply(a: ResultName[Todo])(rs: WrappedResultSet): Todo = new Todo(
    id     = rs.long(todo.id),
    userId = rs.longOpt(todo.userId),
    value  = rs.string(todo.value)
  )

  def apply(m: ResultName[Todo], a: ResultName[Account])(rs: WrappedResultSet) = {
    apply(m)(rs).copy(users = rs.longOpt(a.id).map(_ => Account(a)(rs)))
  }
}

The Account model is:

case class Account(id: Int, email: String, password: String, name: String, permission: Role, todos: Seq[Todo] = Nil)

object Account extends SQLSyntaxSupport[Account] {
  ...
    val (a, t) = (Account.syntax, Todo.syntax)
    val accounts: List[Account] = withSQL {
      select.from(Account as a).leftJoin(Todo as t).on(a.id, t.userId)
    }.one(Account(a))
      .toMany(Todo.opt(t))
      .map { (account, todos) => account.copy(todos = todos) }
      .list.apply()
  }
}

The error I get is:

[error] G:\testprojects\mifun\app\models\Todo.scala:23: overloaded method apply needs result type
[error]     apply(m)(rs).copy(users = rs.longOpt(a.id).map(_ => Account(a)(rs)))
[error]          ^
[error] G:\testprojects\mifun\app\models\Account.scala:53: type mismatch;
[error]  found   : scalikejdbc.QuerySQLSyntaxProvider[scalikejdbc.SQLSyntaxSupport[models.Todo],models.Todo]
[error]  required: scalikejdbc.ResultName[models.Todo]
[error]     (which expands to)  scalikejdbc.ResultNameSQLSyntaxProvider[scalikejdbc.SQLSyntaxSupport[models.Todo],models.Todo]
[error]     .toMany(Todo.opt(t))
[error]                      ^
[error] two errors found
[error] (compile:compile) Compilation failed

I have two questions:

1. Why can't I use toMany? If I want to use a ResultNameSQLSyntaxProvider, how should I change the opt function I wrote?

2. What result type should I give on Todo.scala:23?
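
A minimal sketch of the two fixes the compiler appears to be asking for, assuming Account has an apply(ResultName[Account]) mapper as the Todo code implies: annotate the overloaded apply's result type, and hand toMany a ResultName via t.resultName rather than the bare syntax provider.

// in Todo: the overloaded apply needs an explicit result type
def apply(m: ResultName[Todo], a: ResultName[Account])(rs: WrappedResultSet): Todo =
  apply(m)(rs).copy(users = rs.longOpt(a.id).map(_ => Account(a)(rs)))

// in Account: pass ResultNames, not syntax providers
val accounts: List[Account] = withSQL {
  select.from(Account as a).leftJoin(Todo as t).on(a.id, t.userId)
}.one(Account(a.resultName))
  .toMany(Todo.opt(t.resultName))
  .map { (account, todos) => account.copy(todos = todos) }
  .list.apply()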


Source: (StackOverflow)

batch insert in scalikejdbc is slow on remote computer

I am trying to insert into a table in batches of 100 (I heard that's the best batch size for MySQL). I use Scala 2.10.4 with sbt 0.13.6, and the JDBC framework I am using is ScalikeJDBC with HikariCP. My connection settings look like this:

val dataSource: DataSource = {
  val ds = new HikariDataSource()
  ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
  ds.addDataSourceProperty("url", "jdbc:mysql://" + org.Server.GlobalSettings.DB.mySQLIP + ":3306?rewriteBatchedStatements=true")
  ds.addDataSourceProperty("autoCommit", "false")
  ds.addDataSourceProperty("user", "someUser")
  ds.addDataSourceProperty("password", "not my password")
  ds
}

ConnectionPool.add('review, new DataSourceConnectionPool(dataSource))

The insert code:

try {
  implicit val session = AutoSession
  val paramList: scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]] = scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]]()
  .
  .
  .
  for (rev <- reviews) {
  paramList += Seq[(Symbol, Any)](
            'review_id -> rev.review_idx,
            'text -> rev.text,
            'category_id -> rev.category_id,
            'aspect_id -> aspectId,
            'not_aspect -> noAspect /*0*/ ,
            'certainty_aspect -> rev.certainty_aspect,
            'sentiment -> rev.sentiment,
            'sentiment_grade -> rev.certainty_sentiment,
            'stars -> rev.stars
          )
  }
  .
  .
  .
  try {
    if (paramList != null && paramList.length > 0) {
        val result = NamedDB('review) localTx { implicit session =>
        sql"""INSERT INTO `MasterFlow`.`classifier_results`
        (
            `review_id`,
            `text`,
            `category_id`,
            `aspect_id`,
            `not_aspect`,
            `certainty_aspect`,
            `sentiment`,
            `sentiment_grade`,
            `stars`)
        VALUES
              ( {review_id}, {text}, {category_id}, {aspect_id},
              {not_aspect}, {certainty_aspect}, {sentiment}, {sentiment_grade}, {stars})
        """
          .batchByName(paramList.toIndexedSeq: _*)/*.__resultOfEnsuring*/
          .apply()
        }

Each time I insert a batch it takes 15 seconds. My logs:

29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - After cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:46 - DEBUG[default-akka.actor.default-dispatcher-3] StatementExecutor$$anon$1 - SQL execution completed

  [SQL Execution]
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
.
.
.
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
   ... (total: 100 times); (15466 ms)

  [Stack Trace]
    ...
    logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:119)
    logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:96)
    scalikejdbc.DBConnection$$anonfun$_localTx$1$1.apply(DBConnection.scala:252)
    scala.util.control.Exception$Catch.apply(Exception.scala:102)
    scalikejdbc.DBConnection$class._localTx$1(DBConnection.scala:250)
    scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
    scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
    scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33)
    scalikejdbc.NamedDB.using(NamedDB.scala:32)
    scalikejdbc.DBConnection$class.localTx(DBConnection.scala:257)
    scalikejdbc.NamedDB.localTx(NamedDB.scala:32)
    logic.DB.ClassifierJsonToDB$.insertBulk(ClassifierJsonToDB.scala:96)
    logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:176)
    logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:167)
    scala.collection.Iterator$class.foreach(Iterator.scala:727)
    ...

When I run it on the server that hosts the MySQL database it runs fast. What can I do to make it run faster from a remote computer?
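
A minimal sketch of the usual first remedy, assuming per-row network round-trips are the bottleneck: make sure rewriteBatchedStatements actually reaches the driver (note the path segment before the query string, which the URL above lacks) so Connector/J can collapse the 100 inserts into one packet. Host and database names here are placeholders.

val ds = new HikariDataSource()
ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource")
// without the "/<db>" (or at least "/") before "?", the parameter may never be parsed
ds.addDataSourceProperty("url",
  "jdbc:mysql://db.example.com:3306/MasterFlow?rewriteBatchedStatements=true")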


Source: (StackOverflow)

How to generate SQL dynamically

I would like to use this library only for generating SQL, without executing it. Could you show me a good example of how to use SQLSyntax just to generate a statement? For example:

val query:String = //Use SQLSyntax

println(query)

res1: select * from TABLE where A = ?

val bindedParameters:List[String] = ....
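
A minimal sketch, assuming only the statement text and bind values are needed: an SQLSyntax built with the sqls interpolator carries both without ever touching a connection.

import scalikejdbc._

val a = 42 // hypothetical bind value
val stmt: SQLSyntax = sqls"select * from TABLE where A = ${a}"

val query: String = stmt.value                    // select * from TABLE where A = ?
val bindedParameters: Seq[Any] = stmt.parameters  // Seq(42)
println(query)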


Source: (StackOverflow)

ScalikeJDBC + SQlite: Cannot change read-only flag after establishing a connection

I'm trying to get ScalikeJDBC working with SQLite. I have simple code based on the provided examples:

import scalikejdbc._, SQLInterpolation._

object Test extends App {
  Class.forName("org.sqlite.JDBC")
  ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)

  implicit val session = AutoSession

  println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply())
}

It fails with an exception:

Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and SQLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...

I've tried both scalikejdbc 1.7 and 2.0; the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".

What can I do to fix the error?
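
A minimal sketch of one workaround, assuming the failure comes from the pool flipping the read-only flag when ScalikeJDBC opens a read-only session for selects: run the query in an explicit non-read-only block instead of relying on AutoSession.

import scalikejdbc._

Class.forName("org.sqlite.JDBC")
ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)

val row = DB autoCommit { implicit session =>
  sql"SELECT * FROM kv WHERE key = 'seq' LIMIT 1".map(_.toMap).single().apply()
}
println(row)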


Source: (StackOverflow)

Why does ScalikeJdbc require an Execution Context when it has a Thread Pool?

In this example an Execution Context is used to process the future.

Why is this used when ScalikeJDBC has a built-in connection pool?

Shouldn't the Future use one of the pool threads to execute? It seems like a real waste to ForkJoin a thread just to wait on the Future while another thread does the IO work.

http://scalikejdbc.org/documentation/transaction.html

object FutureDB {
  implicit val ec = myOwnExecutorContext
  def updateFirstName(id: Int, firstName: String)(implicit session: DBSession): Future[Int] = {
    Future { 
      blocking {
        session.update("update users set first_name = ? where id = ?", firstName, id)
      } 
    }
  }
  def updateLastName(id: Int, lastName: String)(implicit session: DBSession): Future[Int] = {
    Future { 
      blocking {
        session.update("update users set last_name = ? where id = ?", lastName, id)
      } 
    }
  }
}

object Example {
  import FutureDB._
  val fResult = DB futureLocalTx { implicit s =>  
    updateFirstName(3, "John").flatMap(_ => updateLastName(3, "Smith"))
  }
}

Example.fResult.foreach(println(_))
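
For what it's worth, the connection pool holds java.sql.Connection objects, not threads, so some thread source is still needed to run the blocking JDBC calls. A common arrangement (a sketch, with a hypothetical pool size) is a dedicated fixed-size executor matched to the connection pool:

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// one thread per pooled connection, so a borrowed connection never waits for a thread
implicit val jdbcExecutionContext: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(10))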

Source: (StackOverflow)

ScalikeJDBC won't connect to NamedDB for DSL queries in ScalaTest test cases

I'm having a heck of a time using a test database for my ScalaTest test cases, as shown in the documentation examples.

I have a default database and a testdb database, and my Spec looks like:

class JobSpec extends FlatSpec with AutoRollback {
  DBsWithEnv("test").setup('testdb)
  override def db = NamedDB('testdb).toDB

  override def fixture(implicit session:DBSession) = {
    User.insert("test_user")
  }

  it should "create a new user" in { implicit session: DBSession =>
    User.sqlFind("test_user") //succeeds
    User.dslFind("test_user") //fails
  }
}

It seems that my queries using sql interpolation work, but the ones using the DSL do not. The DSL queries fail, trying to access the 'default database, while the sql queries correctly use the 'testdb database. Here's the error:

Connection pool is not yet initialized.(name:'default)
java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)

Here's the User class:

case class User(name: String)
object User extends SQLSyntaxSupport[User] {
  def apply(u: SyntaxProvider[User])(rs: WrappedResultSet): User = apply(u.resultName)(rs)
  def apply(u: ResultName[User])(rs: WrappedResultSet): User = new User(rs.get(u.name))

  override val tableName = "users"
  val u = User.syntax("u")

  def dslFind(name: String)(implicit session: DBSession) =
    withSQL {
      select.from(User as u).where.eq(u.name, name)
    }.map(User(u)).single().apply()

  def sqlFind(name: String)(implicit session: DBSession) =
    sql""" select (name) from users where name = $name;"""
      .map(rs => new User(rs.string(1))).single().apply()
}

Anyone know why it is trying to use the default database instead of the testdb, when calling DSL-created queries? Thanks!
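
A minimal sketch of the likely fix: SQLSyntaxSupport loads its table metadata through the pool named by connectionPoolName, which defaults to 'default, so point it at the test pool.

object User extends SQLSyntaxSupport[User] {
  override val tableName = "users"
  override def connectionPoolName: Any = 'testdb
  // ... rest as above
}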


Source: (StackOverflow)

Accessing to PostgreSQL array via ScalikeJDBC

I'm trying to use ScalikeJDBC to access an array in PostgreSQL 9.4. DDL:

create table itab (
        code varchar primary key,
        group_list varchar[]
);

A simple case class and loader are defined in the Scala application.

case class Item(code: String, groupSet: List[String])

trait loader {
  def loadAllItems: List[Item] =
    insideReadOnly { implicit session =>
      sql"select CODE, GROUP_LIST from ITAB"
        .map(e => Item(
          e.string("code"),
          e.array("group_list").asInstanceOf[Buffer[String]]
        )).list.apply()
    }
}

When I run the application I get a runtime exception:

java.lang.ClassCastException: org.postgresql.jdbc4.Jdbc4Array cannot be cast to scala.collection.mutable.Buffer

How can I resolve it? Thanks. Hoviman.
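
A minimal sketch of the usual conversion: rs.array returns a java.sql.Array, whose getArray yields the underlying String[] for a varchar[] column, which can then be cast and converted to a Scala List.

import scalikejdbc._

def loadAllItems(implicit session: DBSession): List[Item] =
  sql"select CODE, GROUP_LIST from ITAB"
    .map(rs => Item(
      rs.string("code"),
      // java.sql.Array -> JDBC String[] -> immutable Scala List
      rs.array("group_list").getArray.asInstanceOf[Array[String]].toList
    )).list.apply()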


Source: (StackOverflow)

AutoRollback doesn't rollback

After I run the following spec, the table exists. I expected it to never be present as it should only exist within the eventually rolled-back transaction.

import java.util.UUID

import org.specs2.mutable.Specification
import scalikejdbc.{DB, NamedDB}
import scalikejdbc.config.DBs
import scalikejdbc.specs2.mutable.AutoRollback

class MyQuerySpec extends Specification with ArbitraryInput {

  sequential

  DBs.setup('myDB)

  "creating the table" in new AutoRollback {
    override def db(): DB = NamedDB('myDB).toDB()
    private val tableName = s"test_${UUID.randomUUID().toString.replaceAll("-", "_")}"
    private val query = new MyQuery(tableName)

    query.createTable
    ok
  }
}

The line DBs.setup('myDB) is not part of the examples. But if I remove it I get the exception java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'myDB)

The source of MyQuery.createTable:

SQL(s"DROP TABLE IF EXISTS $tableName").execute().apply()
SQL(s"""
     |CREATE TABLE $tableName (
     |  id               bigint PRIMARY KEY
     |)""".stripMargin).execute().apply()

Config:

db {
  myDB {
    driver = "org.postgresql.Driver"
    url = "****"
    user = "****"
    password = "****"
    poolInitialSize = 1
    poolMaxSize = 300
    poolConnectionTimeoutMillis = 120000
    poolValidationQuery = "select 1 as one"
    poolFactoryName = "commons-dbcp2"
  }
}

ScalikeJDBC v2.2.9
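
One thing worth checking, as a sketch under the assumption that MyQuery defaults to its own AutoSession: any statement executed on a session other than the spec's runs outside the transaction AutoRollback rolls back, so the session should be threaded through explicitly.

class MyQuery(tableName: String) {
  // takes the caller's session instead of defaulting to AutoSession
  def createTable()(implicit session: DBSession): Unit = {
    SQL(s"DROP TABLE IF EXISTS $tableName").execute().apply()
    SQL(s"CREATE TABLE $tableName (id bigint PRIMARY KEY)").execute().apply()
  }
}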


Source: (StackOverflow)

Join on two foreign keys from same table in scalikejdbc

So I have one table that has two FKs pointing at the same table.

For example:

A Message table with columns sender and receiver that both reference id in the user table.

When I write a query to fetch messages and join on both, the result is the same user for both: the first one.

Here is how I'm trying to do it:

import scalikejdbc._

Class.forName("org.h2.Driver")
ConnectionPool.singleton("jdbc:h2:mem:hello", "user", "pass")

implicit val session = AutoSession

sql"""
create table members (
  id serial not null primary key,
  name varchar(64),
  created_at timestamp not null
)
""".execute.apply()

sql"""
create table message (
  id serial not null primary key,
  msg varchar(64) not null,
  sender int not null,
  receiver int not null
)
""".execute.apply()

Seq("Alice", "Bob", "Chris") foreach { name =>
  sql"insert into members (name, created_at) values (${name}, current_timestamp)".update.apply()
}

Seq(
    ("msg1", 1, 2),
    ("msg2", 1, 3),
    ("msg3", 2, 1)
) foreach { case (m, s, r) =>
  sql"insert into message (msg, sender, receiver) values (${m}, ${s}, ${r})".update.apply()
}

import org.joda.time._
case class Member(id: Long, name: Option[String], createdAt: DateTime)
object Member extends SQLSyntaxSupport[Member] {
  override val tableName = "members"
  def apply(mem: ResultName[Member])(rs: WrappedResultSet): Member = new Member(
    rs.long("id"), rs.stringOpt("name"), rs.jodaDateTime("created_at"))
}

case class Message(id: Long, msg: String, sender: Member, receiver: Member)
object Message extends SQLSyntaxSupport[Message] {
    override val tableName = "message"
  def apply(ms: ResultName[Message], s: ResultName[Member], r: ResultName[Member])(rs: WrappedResultSet): Message = new Message(
    rs.long("id"), rs.string("msg"), Member(s)(rs), Member(r)(rs))
}

val mem = Member.syntax("m")
val s = Member.syntax("s")
val r = Member.syntax("r")
val ms = Message.syntax("ms")
val msgs: List[Message] = sql"""
  select * 
  from ${Message.as(ms)}
  join ${Member.as(s)} on ${ms.sender} = ${s.id}
  join ${Member.as(r)} on ${ms.receiver} = ${r.id}
  """.map(rs => Message(ms.resultName, s.resultName, r.resultName)(rs)).list.apply()

Am I doing something wrong, or is it a bug?
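
A minimal sketch of the likely fix: rs.long("id") reads by bare column label, which resolves to the first matching column regardless of alias, so both Members come from the same row. Reading through the ResultName, and selecting the aliased result columns, keeps the two joins apart.

object Member extends SQLSyntaxSupport[Member] {
  override val tableName = "members"
  def apply(m: ResultName[Member])(rs: WrappedResultSet): Member = new Member(
    rs.long(m.id), rs.stringOpt(m.name), rs.jodaDateTime(m.createdAt))
}

// Message.apply needs the same treatment: rs.long(ms.id), rs.string(ms.msg), ...
val msgs: List[Message] = sql"""
  select ${ms.result.*}, ${s.result.*}, ${r.result.*}
  from ${Message.as(ms)}
  join ${Member.as(s)} on ${ms.sender} = ${s.id}
  join ${Member.as(r)} on ${ms.receiver} = ${r.id}
  """.map(rs => Message(ms.resultName, s.resultName, r.resultName)(rs)).list.apply()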


Source: (StackOverflow)

value withSQL not found

Still new to ScalikeJDBC, I'm getting this error when compiling code that was generated by reverse-engineering a MySQL DB with scalikejdbc 1.7.7: "not found: value withSQL".

Any idea?

def find(id: String)(implicit session: DBSession = autoSession): Option[EmployeeBasicDetail] = {
  withSQL {
    select.from(EmployeeBasicDetail as ebd).where.eq(ebd.id, id)
  }.map(EmployeeBasicDetail(ebd.resultName)).single.apply()
}

In the method above, the compiler points at withSQL: not found: value withSQL.

thanks
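
A minimal sketch of the usual cause on 1.7.x: the query DSL lives in the separate scalikejdbc-interpolation module, so it needs both the dependency and its import; on 2.x a single import scalikejdbc._ is enough.

// build.sbt (1.7.x): libraryDependencies += "org.scalikejdbc" %% "scalikejdbc-interpolation" % "1.7.7"

import scalikejdbc._
import scalikejdbc.SQLInterpolation._ // brings withSQL / select into scope on 1.7.x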


Source: (StackOverflow)

Is there an example of using an hstore data structure with scalikejdbc and postgres?

We have a use case where an hstore column in a table would be very helpful for solving a problem with our current data model. Our current setup is Postgres with scalikejdbc. The problem is that there seems to be no documentation on how this would be done, though indications are that it is supported with the latest JDBC drivers.

Are there any examples of using an hstore data type with Postgres and scalikejdbc?
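
A minimal sketch of reading one, assuming the PostgreSQL JDBC driver's stock hstore mapping (getObject returns a java.util.Map); the kv_store table and attrs column are hypothetical.

import scalikejdbc._
import scala.collection.JavaConverters._

def loadAttrs(id: Long)(implicit session: DBSession): Option[Map[String, String]] =
  sql"select attrs from kv_store where id = ${id}"
    .map(_.any("attrs").asInstanceOf[java.util.Map[String, String]].asScala.toMap)
    .single.apply()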


Source: (StackOverflow)

ScalikeJdbc Multiple Insert

How do we perform multiple inserts in the same transaction?

  def insertData(dataList: List[Data])(implicit session: DBSession = autoSession) = {
    // todo: this is probably opening and closing a connection every time?
    dataList.foreach(data => insertData(data))
  }

  def insertData(data: Data)(implicit session: DBSession) = withSQL {
    val d = DataTable.column
    insert.into(DataTable).namedValues(
      d.name -> data.name,
      d.title -> data.title
    )
  }.update().apply()

It would not be efficient to have a separate transaction for every insert if these number in the thousands and up.

http://scalikejdbc.org/documentation/operations.html
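
A minimal sketch of both options, assuming the Data/DataTable shapes above: wrap the loop in a single localTx so every insert shares one transaction, or go further and send all rows as one JDBC batch.

def insertAll(dataList: List[Data]): Unit = DB localTx { implicit session =>
  val d = DataTable.column
  withSQL {
    insert.into(DataTable).namedValues(
      d.name -> sqls.?,
      d.title -> sqls.?
    )
  }.batch(dataList.map(data => Seq(data.name, data.title)): _*).apply()
}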


Source: (StackOverflow)