Slick
Scala Language Integrated Connection Kit
How do you update multiple columns using Slick Lifted Embedding? The documentation doesn't say much.
I expected it to be something like this
Query(AbilitiesTable)
  .filter((ab: AbilitiesTable.type) => ab.id === ability_id)
  .map((ab: AbilitiesTable.type) => (ab.verb, ab.subject))
  .update("edit", "doc")
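For reference, the shape above is close to what the lifted embedding actually accepts; a sketch, assuming a query whose mapped projection is updated with a tuple (AbilitiesTable, verb, and subject are taken from the question):

```scala
Query(AbilitiesTable)
  .filter(_.id === ability_id)
  .map(ab => (ab.verb, ab.subject))   // project onto the columns to change
  .update(("edit", "doc"))            // update takes one value of the projected type
```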
Source: (StackOverflow)
I'm trying to understand how Slick works and what it requires.
Here is an example:
package models

case class Bar(id: Option[Int] = None, name: String)

object Bars extends Table[Bar]("bar") {
  // This is the primary key column
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  // Every table needs a * projection with the same type as the table's type parameter
  def * = id.? ~ name <> (Bar, Bar.unapply _)
}
Could somebody explain to me the purpose of the * method here, what <> is, and why unapply is needed? And what is a Projection: the ~ method returns an instance of Projection2?
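The <> mapping is just a pair of conversion functions between a projection's tuple type and the case class; plain Scala shows the two halves Slick composes (no Slick required here, and the Demo wrapper is mine):

```scala
case class Bar(id: Option[Int] = None, name: String)

object Demo extends App {
  // tuple -> case class: what Slick applies when reading a row
  val fromTuple: ((Option[Int], String)) => Bar = (Bar.apply _).tupled

  // case class -> tuple: what Slick applies when writing a row
  val toTuple: Bar => Option[(Option[Int], String)] = Bar.unapply _

  val bar = fromTuple((Some(1), "stout"))
  println(bar)          // Bar(Some(1),stout)
  println(toTuple(bar)) // Some((Some(1),stout))
}
```

As for ~: it chains columns into a typed Projection (two columns give a Projection2), and * designates the projection that represents a full table row.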
Source: (StackOverflow)
Is there a way I can neatly do an upsert operation in Slick? The following works but is too obscure/verbose and I need to explicitly state the fields that should be updated:
val id = 1
val now = new Timestamp(System.currentTimeMillis)
val q = for { u <- Users if u.id === id } yield u.lastSeen
q.update(now) match {
  case 0 => Users.insert((id, now, now))
  case _ => ()
}
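A sketch of the tighter alternative, assuming Slick 2.1+ where queries gain insertOrUpdate (backed by the database's native upsert where one exists; Users and the row shape come from the question):

```scala
// Inserts the row, or updates it if a row with the same primary key exists
Users.insertOrUpdate((id, now, now))
```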
Source: (StackOverflow)
I have looked to the ends of the earth for the answer to this question. There is not much info out there on Slick 2.0. Below is the code for my Addresses model; how would I have the create method return the id after it makes the insert?
package models

import play.api.Play.current
import play.api.db.slick.Config.driver.simple._
import play.api.db.slick.DB

object Addresses {
  val DB_URL: String = "jdbc:h2:mem:fls-play"
  val DB_driver: String = "org.h2.Driver"

  class Addresses(tag: Tag) extends Table[(String, String, String, String, String)](tag, "ADDRESSES") {
    def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
    def city = column[String]("CITY")
    def stateName = column[String]("STATE_NAME")
    def street1 = column[String]("STREET1")
    def street2 = column[String]("STREET2")
    def zip = column[String]("ZIP")
    def * = (city, stateName, street1, street2, zip)
  }

  val addresses = TableQuery[Addresses]

  def create(city: String, stateName: String, street1: String, street2: String, zip: String) {
    DB.withSession { implicit session =>
      addresses += (city, stateName, street1, street2, zip)
    }
  }
}
Thank you!
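A sketch using Slick 2.0's returning combinator to hand back the generated key (only the return type of create changes; everything else is from the question):

```scala
def create(city: String, stateName: String, street1: String, street2: String, zip: String): Int =
  DB.withSession { implicit session =>
    // "returning addresses.map(_.id)" makes += yield the auto-generated ID
    (addresses returning addresses.map(_.id)) += (city, stateName, street1, street2, zip)
  }
```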
Source: (StackOverflow)
I'm using Slick with Play Framework 2.1 and I'm having some trouble.
Given the following entity...
package models

import scala.slick.driver.PostgresDriver.simple._

case class Account(id: Option[Long], email: String, password: String)

object Accounts extends Table[Account]("account") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def email = column[String]("email")
  def password = column[String]("password")
  def * = id.? ~ email ~ password <> (Account, Account.unapply _)
}
...I have to import a package for a specific database driver, but I want to use H2 for testing and PostgreSQL in production. How should I proceed?
I was able to work around this by overriding the driver settings in my unit test:
package test

import org.specs2.mutable._
import play.api.test._
import play.api.test.Helpers._
import scala.slick.driver.H2Driver.simple._
import Database.threadLocalSession
import models.{Accounts, Account}

class AccountSpec extends Specification {
  "An Account" should {
    "be creatable" in {
      Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver") withSession {
        Accounts.ddl.create
        Accounts.insert(Account(None, "user@gmail.com", "Password"))
        val account = for (account <- Accounts) yield account
        account.first.id.get mustEqual 1
      }
    }
  }
}
I don't like this solution and I'm wondering if there is an elegant way to write DB-agnostic code, so that two different database engines can be used: one in testing and another in production.
I don't want to use evolutions, either, and prefer to let Slick create the database tables for me:
import play.api.Application
import play.api.GlobalSettings
import play.api.Play.current
import play.api.db.DB
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession
import models.Accounts

object Global extends GlobalSettings {
  override def onStart(app: Application) {
    lazy val database = Database.forDataSource(DB.getDataSource())
    database withSession {
      Accounts.ddl.create
    }
  }
}
The first time I start the application, everything works fine... then, of course, the second time I start the application it crashes because the tables already exist in the PostgreSQL database.
That said, my last two questions are:
- How can I determine whether or not the database tables already exist?
- How can I make the onStart method above DB-agnostic so that I can test my application with FakeApplication?
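One common approach in Slick 1.x/2.x is to abstract the driver behind a trait and inject the concrete driver per environment (a sketch; Profile, AccountComponent, and DAL are illustrative names, and MTable is Slick's metadata API for checking whether a table already exists):

```scala
import scala.slick.driver.ExtendedProfile

trait Profile {
  val profile: ExtendedProfile
}

trait AccountComponent { this: Profile =>
  import profile.simple._

  object Accounts extends Table[(Long, String, String)]("account") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def email = column[String]("email")
    def password = column[String]("password")
    def * = id ~ email ~ password
  }
}

// Wire in PostgresDriver for production, H2Driver for tests:
class DAL(override val profile: ExtendedProfile) extends AccountComponent with Profile

// Guarding DDL creation in onStart (needs an implicit session):
//   import scala.slick.jdbc.meta.MTable
//   if (MTable.getTables("account").list().isEmpty) dal.Accounts.ddl.create
```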
Source: (StackOverflow)
I am attempting to learn to use Slick to query MySQL. I have the following type of query working to get a single Visit object:
Q.query[(Int, Int), Visit]("""
  select * from visit where vistor = ? and location_code = ?
""").firstOption(visitorId, locationCode)
What I would like to know is how I can change the above query to get a List[Visit] for a collection of locations... something like this:
val locationCodes = List("loc1", "loc2", "loc3", ...)
Q.query[(Int, Int, List[String]), Visit]("""
  select * from visit where vistor = ? and location_code in (?,?,?...)
""").list(visitorId, locationCodes)
Is this possible with Slick?
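With plain SQL the placeholders must be generated by hand, but the lifted embedding covers this case directly with inSet (a sketch, assuming a Visits table object with visitor and locationCode columns):

```scala
val locationCodes = List("loc1", "loc2", "loc3")
val q = for {
  v <- Visits
  if v.visitor === visitorId && v.locationCode.inSet(locationCodes)
} yield v
// q.list gives the List[Visit]
```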
Source: (StackOverflow)
I'm trying out Slick 3.0.0-RC1 and I'm running into an odd problem.
This is my code:
import slick.driver.SQLiteDriver.api._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration.Duration

lazy val db = Database.forURL(
  url = "jdbc:sqlite:thebase.db",
  driver = "org.sqlite.JDBC"
)

case class Issue(id: Option[Int], name: String)

class IssueTable(tag: Tag) extends Table[Issue](tag, "issue") {
  def id = column[Int]("issue_id", O.PrimaryKey)
  def name = column[String]("name")
  def * = (id.?, name) <> (Issue.tupled, Issue.unapply _)
}

val issueQuery = TableQuery[IssueTable]

Await.result(db.run(issueQuery.result), Duration.Inf) // This does not compile
"Cannot resolve symbol result"
Now, reading the docs I can't really see why this should fail. Am I missing something here?
Edit: szeiger pointed out that this could be a bug in IntelliJ's presentation compiler, and that was spot on. Very happy with the answer, but IDEA, I'm disappointed.
Source: (StackOverflow)
I'm writing a Scala web app using Play Framework 2.1.1 using a local Postgres database along with Slick 1.0.0, and I'm running into what seems to be a contradiction here.
This is the error I'm running into:
[SQLException: No suitable driver found for postgres://user:password@localhost:5432/postgres]
def instance = Action {
  Database.forURL("postgres://user:password@localhost:5432/postgres", driver = "org.postgresql.Driver") withSession {
    val q = Retailer.map(_.name)
    Ok(views.html.instance(q.list, newRForm))
  }
}
Where user and password are, respectively, the username and password of the Postgres database.
For the Java error "No suitable driver found" I found:
- You'll need to load the driver somewhere using Class.forName("org.postgresql.Driver");
- You'll need the PostgreSQL driver's jar file in the classpath of your program.
In Application.scala
I have the following block of code:
{
  println(ConfigFactory.load().getString("db.default.url"))
  println(Class.forName("org.postgresql.Driver"))
}
Rerunning play compile
results in:
(Server started, use Ctrl+D to stop and go back to the console...)
[info] play - database [default] connected at jdbc:postgresql://localhost:5432/postgres
[info] play - Application started (Dev)
postgres://user:password@localhost:5432/postgres
class org.postgresql.Driver
[error] application -
! @6ei1nhkop - Internal server error, for (GET) [/instance] ->
play.api.Application$$anon$1: Execution exception[[SQLException: No suitable driver found for jdbc:postgresql://user:password@localhost:5432/postgres]]
at play.api.Application$class.handleError(Application.scala:289) ~[play_2.10.jar:2.1.1]
at play.api.DefaultApplication.handleError(Application.scala:383) [play_2.10.jar:2.1.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$17$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:326) [play_2.10.jar:2.1.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$17$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:324) [play_2.10.jar:2.1.1]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.1]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.1]
java.sql.SQLException: No suitable driver found for jdbc:postgresql://user:password@localhost:5432/postgres
at java.sql.DriverManager.getConnection(Unknown Source) ~[na:1.7.0_21]
at java.sql.DriverManager.getConnection(Unknown Source) ~[na:1.7.0_21]
at scala.slick.session.Database$$anon$2.createConnection(Database.scala:105) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.session.BaseSession.conn$lzycompute(Session.scala:207) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.session.BaseSession.conn(Session.scala:207) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.session.BaseSession.close(Session.scala:221) ~[slick_2.10-1.0.0.jar:1.0.0]
Then I run play dependencies
and the postgres .jar is resolved!
Here are the resolved dependencies of your application:
+----------------------------------------+----------------------------+------------------------------------+
| postgresql:postgresql:9.1-901-1.jdbc4  | ats:ats_2.10:1.0-SNAPSHOT  | As postgresql-9.1-901-1.jdbc4.jar  |
+----------------------------------------+----------------------------+------------------------------------+
Why can't a suitable driver be found?
conf/application.conf
# Database configuration
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgres://user:password@localhost:5432/postgres"
db.default.user=user
db.default.password=password
project/Build.scala
import sbt._
import Keys._
import play.Project._

object ApplicationBuild extends Build {

  val appName = "ats"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    // Add your project dependencies here,
    jdbc,
    "com.typesafe.slick" %% "slick" % "1.0.0",
    "postgresql" % "postgresql" % "9.1-901-1.jdbc4"
  )

  val main = play.Project(appName, appVersion, appDependencies).settings(
    // Add your own project settings here
  )
}
I also have postgresql-9.2-1002.jdbc4.jar and slick_2.10-1.0.1-RC1.jar in my /lib folder, and my local Postgres version from SELECT version(); is 9.2.4. The Postgres driver seems to resolve to the 9.1 jar, though, and when I comment out the app dependency to let /lib be included on its own, /lib doesn't seem to be on Play's CLASSPATH.
I know that the Postgres URL is correct, and I'm able to connect to my database when my application first launches.
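For reference, the JDBC PostgreSQL driver only recognizes URLs with the jdbc:postgresql subprotocol, and credentials are passed as separate parameters rather than embedded in the URL; a sketch of what the corrected forURL call would look like:

```scala
Database.forURL(
  "jdbc:postgresql://localhost:5432/postgres", // jdbc:postgresql, no inline user:password
  user = "user",
  password = "password",
  driver = "org.postgresql.Driver"
) withSession {
  // ...
}
```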
Source: (StackOverflow)
Maybe a silly question, but I have not found an answer so far: how do you represent SQL's LIKE operator in Slick?
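The lifted embedding exposes like directly on string columns; a sketch, assuming a Coffees table with a name column:

```scala
// select "NAME" from "coffees" where "NAME" like '%Colombian%'
val q = for (c <- Coffees if c.name like "%Colombian%") yield c.name
```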
Source: (StackOverflow)
I want to be able to fetch all records from a very big table using Slick.
If I try to do this through foreach, a for comprehension, or list fetching, I get an OutOfMemoryError.
Is there any way to use cursors or lazy loading with Slick, so that each object is fetched only when needed, reducing the amount of memory used?
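The exact API depends on the Slick version; a sketch of both styles (records and process are illustrative names, and note that some drivers, MySQL in particular, also need the JDBC fetch size configured before they will stream rows at all):

```scala
// Slick 2.x: consume rows lazily through the query invoker's iterator
// instead of materializing everything with .list (needs an implicit session)
records.iterator.foreach(process)

// Slick 3.x: run the query as a Reactive Streams Publisher
// db.stream(records.result).foreach(process)
```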
Source: (StackOverflow)
How does Slick translate code such as:
val q2 = for {
  c <- Coffees if c.price < 9.0
  s <- Suppliers if s.id === c.supID
} yield (c.name, s.name)

for (t <- q2) println(" " + t._1 + " supplied by " + t._2)
Into JDBC?
Does it use Scala Virtualized? Does it use some other method?
Source: (StackOverflow)
I am in the process of migrating to Slick 2, and in Slick 2 you are meant to use the tupled method when projecting onto a case class (as shown here: http://slick.typesafe.com/doc/2.0.0-RC1/migration.html).
The problem is when the case class has a companion object, i.e. if you have something like this
case class Person(firstName: String, lastName: String)
along with a companion object
object Person {
  def something = "rawr"
}
In the same scope, the tupled method no longer works, because it's trying to call tupled on the object instead of the case class.
Is there a way to retrieve the case class of Person rather than the object, so you can call tupled properly?
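Eta-expanding the constructor sidesteps the companion object; this is plain Scala, no Slick involved:

```scala
case class Person(firstName: String, lastName: String)

object Person {
  def something = "rawr"
}

// Person.tupled fails to resolve because the hand-written companion
// doesn't extend Function2, but the explicit expansion always works:
val fromTuple: ((String, String)) => Person = (Person.apply _).tupled
val p = fromTuple(("Jane", "Doe")) // Person(Jane,Doe)
```

In a Slick projection this reads `... <> ((Person.apply _).tupled, Person.unapply)`.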
Source: (StackOverflow)
I want to force Slick to create queries like
select max(price) from coffees where ...
but Slick's documentation doesn't help:
val q = Coffees.map(_.price) // this is a Query[Coffees.type, ...]
val q1 = q.min // this is a Column[Option[Double]]
val q2 = q.max
val q3 = q.sum
val q4 = q.avg
Because q1-q4 aren't queries, I can't get the results, but I can use them inside other queries.
This statement
for {
  coffee <- Coffees
} yield coffee.price.max
generates the right query but is deprecated (it produces the warning: "method max in class ColumnExtensionMethods is deprecated: Use Query.max instead").
How do I generate such a query without warnings?
Another issue is aggregating with group by:
"select name, max(price) from coffees group by name"
I tried to solve it with
(for {
  coffee <- Coffees
} yield (coffee.name, coffee.price.max)).groupBy(x => x._1)
which generates
select x2.x3, x2.x3, x2.x4 from (select x5."COF_NAME" as x3, max(x5."PRICE") as x4 from "coffees" x5) x2 group by x2.x3
which causes obvious db error
column "x5.COF_NAME" must appear in the GROUP BY clause or be used in an aggregate function
How do I generate such a query?
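A sketch of the non-deprecated forms, with the aggregate applied inside the groupBy mapping (the .run call is the Slick 2.x way to execute a scalar query; Coffees, name, and price come from the question):

```scala
// select max(price) from coffees
val maxPrice = Coffees.map(_.price).max
// maxPrice.run  => Option[Double]

// select name, max(price) from coffees group by name
val byName = Coffees
  .groupBy(_.name)
  .map { case (name, group) => (name, group.map(_.price).max) }
```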
Source: (StackOverflow)
I have the following enum:
object LoginStatus extends Enumeration with BitmaskedEnumeration {
  type LoginStatus = Value
  val Active = Value("A")
  val Inactive = Value("I")
}
I need to persist the value of the enum ("A"), but when the SQL is generated the result is 0.
this is the table mapping:
object LoginTable extends Table[Login]("login") {
  def idLogin = column[Int]("idlogin", O.PrimaryKey, O.AutoInc)
  def cdLogin = column[String]("cdlogin", O.NotNull)
  def cdPass = column[String]("cdPass", O.NotNull)
  def stLogin = column[LoginStatus]("stlogin", O.NotNull, O.DBType("character(1)"))
}
How do I persist the enum value?
I implemented:
implicit val charMapper = MappedTypeMapper.base[Char, String](
  b => b.toString,
  i => i.charAt(0))

implicit def enum2StringMapper(enum: Enumeration) = MappedTypeMapper.base[enum.Value, Char](
  b => b.toString.charAt(0),
  i => enum.withName(i.toString))

implicit val LoginStatusMapper = enum2StringMapper(LoginStatus)
but it results in:
[error] c.Login - Invalid value for type int : A
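A sketch for Slick 1.x that maps the enumeration straight to String, so the column round-trips "A"/"I" without an intermediate Char or Int representation (the implicit must be in scope where column[LoginStatus] is declared):

```scala
implicit val loginStatusTypeMapper =
  MappedTypeMapper.base[LoginStatus.Value, String](
    status => status.toString,          // Active -> "A"
    str    => LoginStatus.withName(str) // "A"    -> Active
  )
```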
Source: (StackOverflow)