RDS interview questions
Top frequently asked RDS interview questions
I would like to use RDS (Remote Desktop Services) Dynamic Virtual Channels in my code. There are some samples on the net showing how to do raw communication over virtual channels, and even a library for .NET (RDPAddins.NET), but I was wondering whether there is already a ready-made WCF custom binding for Dynamic Virtual Channels.
Kind Regards
sn0wcat
[1] RDPAddins.NET
[2] How to write Terminal Services Add-In in C#
Source: (StackOverflow)
I'm using the Linux command line to access an Amazon RDS MySQL instance; however, I keep getting an Unknown MySQL host error.
Here is the command:
mysql -uxxx -pxxxx -hmydb.xxxx.us-west-2.rds.amazonaws.com:3306
I have added a MySQL rule to the security group, but it still does not work.
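For reference, the mysql client does not accept a host:port pair in the -h argument; the port has to be passed separately via the -P flag, so a corrected invocation (credentials left as placeholders) would look like:
mysql -u xxx -p -h mydb.xxxx.us-west-2.rds.amazonaws.com -P 3306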
Source: (StackOverflow)
I am trying to find documentation regarding the supported data sources for AWS Data Pipeline. What I need to do is export SQL Server RDS data to S3. I am finding plenty of documentation saying that Data Pipeline can use RDS as a source, but every example I see is for MySQL RDS only.
Does anyone have experience with Data Pipeline and SQL Server RDS? If so, what data node do you use to connect to SQL Server RDS (e.g. MySqlDataNode, SqlDataNode)?
The end target is to move data from SQL Server RDS to AWS Redshift.
Thanks
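One hedged sketch of a possible direction: SqlDataNode (named above) paired with a JDBC-style database object is the natural candidate for a SQL Server source. The field names below are assumptions from memory, not verified against the Data Pipeline documentation, and every value is a placeholder:
{
  "id": "SqlServerSource",
  "type": "SqlDataNode",
  "table": "my_table",
  "selectQuery": "select * from my_table",
  "database": { "ref": "SqlServerDb" }
},
{
  "id": "SqlServerDb",
  "type": "JdbcDatabase",
  "connectionString": "jdbc:sqlserver://mydb.xxxx.rds.amazonaws.com:1433;databaseName=mydb",
  "jdbcDriverClass": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
  "username": "xxx",
  "*password": "xxx"
}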
Source: (StackOverflow)
Is it possible to create a backup of a database running on an Amazon RDS instance and restore it on a local machine using the standard Task -> Backup and Task -> Restore features within Microsoft SQL Server Management Studio? If so, how do you go about doing this?
Note, this question does not pertain to whether you can bulk copy the data or generate the scripts, but whether you can create a true .BAK database backup which can be restored using the SSMS restore feature.
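A pointer that may help: SSMS's Task -> Backup cannot work directly against RDS because RDS never exposes the server's filesystem. However, SQL Server RDS instances that have the native backup/restore option enabled can write a true .BAK file to S3 through a stored procedure, roughly like this (database name and bucket ARN are placeholders):
exec msdb.dbo.rds_backup_database
    @source_db_name = 'mydb',
    @s3_arn_to_backup_to = 'arn:aws:s3:::mybucket/mydb.bak';
The resulting .BAK can then be downloaded from S3 and restored locally with the normal SSMS restore feature.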
Source: (StackOverflow)
Is it possible to enable RDS for ColdFusion 8 on a server that requires Basic Authentication? The web server is IIS.
Source: (StackOverflow)
When you create a new Amazon RDS instance, you are asked to choose true/false for the "publicly accessible" option.
Is there a way to change this for an existing instance?
Thank you,
Ron
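A hedged note: the PubliclyAccessible attribute can be changed on an existing instance through a Modify operation, for example via the AWS CLI (the instance identifier is a placeholder):
aws rds modify-db-instance --db-instance-identifier mydb --publicly-accessible --apply-immediately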
Source: (StackOverflow)
I'd like to join IP routing table information to IP whois information. I'm using Amazon's RDS which means I can't use the Postgres ip4r extension, and so I am instead using int8range types to represent the IP address ranges, with gist indexes.
My tables look like this:
=> \d routing_details
Table "public.routing_details"
Column | Type | Modifiers
----------+-----------+-----------
asn | text |
netblock | text |
range | int8range |
Indexes:
"idx_routing_details_netblock" btree (netblock)
"idx_routing_details_range" gist (range)
=> \d netblock_details
Table "public.netblock_details"
Column | Type | Modifiers
------------+-----------+-----------
range | int8range |
name | text |
country | text |
source | text |
Indexes:
"idx_netblock_details_range" gist (range)
The full routing_details table contains just under 600K rows, and netblock_details contains around 8.25M rows. There are overlapping ranges in both tables, but for each range in the routing_details table I want to get the single best (smallest) match from the netblock_details table.
I came up with two different queries that I think will return accurate data, one using window functions and one using DISTINCT ON:
EXPLAIN SELECT DISTINCT ON (r.netblock) *
FROM routing_details r JOIN netblock_details n ON r.range <@ n.range
ORDER BY r.netblock, upper(n.range) - lower(n.range);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Unique (cost=118452809778.47..118477166326.22 rows=581300 width=91)
Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, r.netblock, ((upper(n.range) - lower(n.range)))
-> Sort (cost=118452809778.47..118464988052.34 rows=4871309551 width=91)
Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, r.netblock, ((upper(n.range) - lower(n.range)))
Sort Key: r.netblock, ((upper(n.range) - lower(n.range)))
-> Nested Loop (cost=0.00..115920727265.53 rows=4871309551 width=91)
Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, r.netblock, (upper(n.range) - lower(n.range))
Join Filter: (r.range <@ n.range)
-> Seq Scan on public.routing_details r (cost=0.00..11458.96 rows=592496 width=43)
Output: r.asn, r.netblock, r.range
-> Materialize (cost=0.00..277082.12 rows=8221675 width=48)
Output: n.range, n.name, n.country
-> Seq Scan on public.netblock_details n (cost=0.00..163712.75 rows=8221675 width=48)
Output: n.range, n.name, n.country
(14 rows)
EXPLAIN VERBOSE SELECT * FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY r.range ORDER BY UPPER(n.range) - LOWER(n.range)) AS rank
FROM routing_details r JOIN netblock_details n ON r.range <@ n.range
) a WHERE rank = 1 ORDER BY netblock;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=118620775630.16..118620836521.53 rows=24356548 width=99)
Output: a.asn, a.netblock, a.range, a.range_1, a.name, a.country, a.rank
Sort Key: a.netblock
-> Subquery Scan on a (cost=118416274956.83..118611127338.87 rows=24356548 width=99)
Output: a.asn, a.netblock, a.range, a.range_1, a.name, a.country, a.rank
Filter: (a.rank = 1)
-> WindowAgg (cost=118416274956.83..118550235969.49 rows=4871309551 width=91)
Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, row_number() OVER (?), ((upper(n.range) - lower(n.range))), r.range
-> Sort (cost=118416274956.83..118428453230.71 rows=4871309551 width=91)
Output: ((upper(n.range) - lower(n.range))), r.range, r.asn, r.netblock, n.range, n.name, n.country
Sort Key: r.range, ((upper(n.range) - lower(n.range)))
-> Nested Loop (cost=0.00..115884192443.90 rows=4871309551 width=91)
Output: (upper(n.range) - lower(n.range)), r.range, r.asn, r.netblock, n.range, n.name, n.country
Join Filter: (r.range <@ n.range)
-> Seq Scan on public.routing_details r (cost=0.00..11458.96 rows=592496 width=43)
Output: r.asn, r.netblock, r.range
-> Materialize (cost=0.00..277082.12 rows=8221675 width=48)
Output: n.range, n.name, n.country
-> Seq Scan on public.netblock_details n (cost=0.00..163712.75 rows=8221675 width=48)
Output: n.range, n.name, n.country
(20 rows)
The DISTINCT ON version seems slightly more efficient, so I've proceeded with that one. When I run the query against the full dataset, I get an out-of-disk-space error after around a 24-hour wait. I've created a routing_details_small table with a subset of N rows of the full routing_details table to try to understand what's going on.
With N=1000
=> EXPLAIN ANALYZE SELECT DISTINCT ON (r.netblock) *
-> FROM routing_details_small r JOIN netblock_details n ON r.range <@ n.range
-> ORDER BY r.netblock, upper(n.range) - lower(n.range);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=4411888.68..4453012.20 rows=999 width=90) (actual time=124.094..133.720 rows=999 loops=1)
-> Sort (cost=4411888.68..4432450.44 rows=8224705 width=90) (actual time=124.091..128.560 rows=4172 loops=1)
Sort Key: r.netblock, ((upper(n.range) - lower(n.range)))
Sort Method: external sort Disk: 608kB
-> Nested Loop (cost=0.41..1780498.29 rows=8224705 width=90) (actual time=0.080..101.518 rows=4172 loops=1)
-> Seq Scan on routing_details_small r (cost=0.00..20.00 rows=1000 width=42) (actual time=0.007..1.037 rows=1000 loops=1)
-> Index Scan using idx_netblock_details_range on netblock_details n (cost=0.41..1307.55 rows=41124 width=48) (actual time=0.063..0.089 rows=4 loops=1000)
Index Cond: (r.range <@ range)
Total runtime: 134.999 ms
(9 rows)
With N=100000
=> EXPLAIN ANALYZE SELECT DISTINCT ON (r.netblock) *
-> FROM routing_details_small r JOIN netblock_details n ON r.range <@ n.range
-> ORDER BY r.netblock, upper(n.range) - lower(n.range);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=654922588.98..659034941.48 rows=200 width=144) (actual time=28252.677..29487.380 rows=98992 loops=1)
-> Sort (cost=654922588.98..656978765.23 rows=822470500 width=144) (actual time=28252.673..28926.703 rows=454856 loops=1)
Sort Key: r.netblock, ((upper(n.range) - lower(n.range)))
Sort Method: external merge Disk: 64488kB
-> Nested Loop (cost=0.41..119890431.75 rows=822470500 width=144) (actual time=0.079..24951.038 rows=454856 loops=1)
-> Seq Scan on routing_details_small r (cost=0.00..1935.00 rows=100000 width=96) (actual time=0.007..110.457 rows=100000 loops=1)
-> Index Scan using idx_netblock_details_range on netblock_details n (cost=0.41..725.96 rows=41124 width=48) (actual time=0.067..0.235 rows=5 loops=100000)
Index Cond: (r.range <@ range)
Total runtime: 29596.667 ms
(9 rows)
With N=250000
=> EXPLAIN ANALYZE SELECT DISTINCT ON (r.netblock) *
-> FROM routing_details_small r JOIN netblock_details n ON r.range <@ n.range
-> ORDER BY r.netblock, upper(n.range) - lower(n.range);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=1651822953.55..1662103834.80 rows=200 width=144) (actual time=185835.443..190143.266 rows=247655 loops=1)
-> Sort (cost=1651822953.55..1656963394.18 rows=2056176250 width=144) (actual time=185835.439..188779.279 rows=1103850 loops=1)
Sort Key: r.netblock, ((upper(n.range) - lower(n.range)))
Sort Method: external merge Disk: 155288kB
-> Nested Loop (cost=0.28..300651962.46 rows=2056176250 width=144) (actual time=19.325..177403.913 rows=1103850 loops=1)
-> Seq Scan on netblock_details n (cost=0.00..163743.05 rows=8224705 width=48) (actual time=0.007..8160.346 rows=8224705 loops=1)
-> Index Scan using idx_routing_details_small_range on routing_details_small r (cost=0.28..22.16 rows=1250 width=96) (actual time=0.018..0.018 rows=0 loops=8224705)
Index Cond: (range <@ n.range)
Total runtime: 190413.912 ms
(9 rows)
Against the full table with 600k rows, the query fails after around 24 hours with an error about running out of disk space, which is presumably caused by the external merge step. So the query works well and very quickly with a small routing_details table, but it scales very poorly.
Any suggestions for how to improve my query, or perhaps schema changes I could make, so that this query will work efficiently on the full dataset?
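One rewrite worth trying, as a hedged sketch assuming PostgreSQL 9.3+ for LATERAL: let each routing_details row probe the gist index once and keep only its single best match, so the multi-billion-row join is never materialized or sorted:
SELECT r.asn, r.netblock, r.range, n.*
FROM routing_details r
CROSS JOIN LATERAL (
    SELECT *
    FROM netblock_details n
    WHERE r.range <@ n.range
    ORDER BY upper(n.range) - lower(n.range)
    LIMIT 1
) n;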
Source: (StackOverflow)
I can see from the AWS console that my RDS instance is being backed up once a day. From the FAQ I understand that it is backed up to S3, but when I use the console to view my S3 buckets, I don't see the RDS backup.
So:
- How do I get my hands on my RDS backup?
- Once I have it, how do I use it to restore my DB, i.e. is it a regular mysqldump file or something else?
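For context: automated RDS backups live in S3 buckets owned by the RDS service itself, so they never show up among your own buckets and cannot be downloaded; they are storage-level snapshots, not mysqldump files. Restoring means creating a new instance from a snapshot or a point in time, for example (identifiers are placeholders):
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydb-restored \
    --db-snapshot-identifier mydb-snapshot-2015-01-01
If a dump file you can restore locally is what you're after, run mysqldump against the instance endpoint yourself.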
Source: (StackOverflow)
Local workstation: Win 7
Terminal Server: Win 2008 Server
Outlook: 2003 running on local workstation.
I'm trying to implement copying and pasting of Outlook messages from the local workstation to the terminal server.
Using the code below, I am able to copy and paste files from the local workstation to the server...
TmyMemoryStream = class(TMemoryStream);
...
procedure TmyMemoryStream.LoadFromIStream(AStream : IStream);
var
iPos : Int64;
aStreamStat : TStatStg;
oOLEStream: TOleStream;
begin
AStream.Seek(0, STREAM_SEEK_SET, iPos);
AStream.Stat(aStreamStat, STATFLAG_NONAME);
oOLEStream := TOLEStream.Create(AStream);
try
Self.Clear;
Self.Position := 0;
Self.CopyFrom( oOLEStream, aStreamStat.cbSize );
Self.Position := 0;
finally
oOLEStream.Free;
end;
end;
...but when I try to copy and paste an Outlook message, the stream size (aStreamStat.cbSize) is 0. I am able to obtain the message subject (file name), but I am unable to read the stream content.
What is wrong with my code?
Complete unit code:
unit Unit1;
interface
uses
dialogs,
Windows, ComCtrls, ActiveX, ShlObj, ComObj, StdCtrls, AxCtrls,
SysUtils, Controls, ShellAPI, Classes, Forms;
type
{****************************************************************************}
TMyDataObjectHandler = class;
PFileDescriptorArray = Array of TFileDescriptor;
{****************************************************************************}
TMyDataObjectHandler = class(TObject)
strict private
CF_FileContents : UINT;
CF_FileGroupDescriptorA : UINT;
CF_FileGroupDescriptorW : UINT;
CF_FileDescriptor : UINT;
FDirectory : string;
function _CanCopyFiles(const ADataObject : IDataObject) : boolean;
function _DoCopyFiles(const ADataObject : IDataObject) : HResult;
//function _ExtractFileNameWithoutExt(const FileName: string): string;
function _CopyFiles(AFileNames: TStringList): HResult;
procedure _GetFileNames(AGroup: PDropFiles; var AFileNames: TStringList);
procedure _ProcessAnsiFiles(ADataObject: IDataObject; AGroup: PFileGroupDescriptorA);
function _ProcessDropFiles(ADataObject: IDataObject; AGroup: PDropFiles): HResult;
procedure _ProcessFileContents(ADataObject: IDataObject; Index: UINT; AFileName: string; AFileSize : Cardinal);
function _ProcessStorageMedium(ADataObject: IDataObject; AMedium: STGMEDIUM; AFilename: string; AFileSize : Cardinal): HResult;
function _ProcessStreamMedium(ADataObject: IDataObject; AMedium: STGMEDIUM; AFileName: String; AFileSize : Cardinal): HResult;
procedure _ProcessUnicodeFiles(ADataObject: IDataObject; AGroup: PFileGroupDescriptorW );
function _CanCopyFile(AFileName: string): boolean;
public
constructor Create; reintroduce;
destructor Destroy; override;
function CanCopyFiles(const ADataObject : IDataObject; const ADirectory : string) : boolean;
procedure CopyFiles(const ADataObject : IDataObject; const ADirectory : string);
end;
{****************************************************************************}
TMyMemoryStream = class( TMemoryStream )
public
procedure LoadFromIStream(AStream : IStream; AFileSize : Cardinal);
function GetIStream : IStream;
end;
{****************************************************************************}
implementation
{------------------------------------------------------------------------------}
{ TMyDataObjectHandler }
function TMyDataObjectHandler.CanCopyFiles(const ADataObject : IDataObject; const ADirectory : string): boolean;
begin
Result := IsDirectoryWriteable( ADirectory);
if Result then
begin
Result := _CanCopyFiles(ADataObject);
end;
end;
{------------------------------------------------------------------------------}
constructor TMyDataObjectHandler.Create;
begin
inherited Create;
CF_FileContents := $8000 OR RegisterClipboardFormat(CFSTR_FILECONTENTS) AND $7FFF;
CF_FileGroupDescriptorA := $8000 OR RegisterClipboardFormat(CFSTR_FILEDESCRIPTORA) AND $7FFF;
CF_FileGroupDescriptorW := $8000 OR RegisterClipboardFormat(CFSTR_FILEDESCRIPTORW) AND $7FFF;
CF_FileDescriptor := $8000 OR RegisterClipboardFormat(CFSTR_FILEDESCRIPTOR) AND $7FFF;
end;
{------------------------------------------------------------------------------}
destructor TMyDataObjectHandler.Destroy;
begin
//
inherited;
end;
{------------------------------------------------------------------------------}
procedure TMyDataObjectHandler.CopyFiles(const ADataObject : IDataObject; const ADirectory : string);
begin
FDirectory := ADirectory;
_DoCopyFiles(ADataObject);
end;
{------------------------------------------------------------------------------}
function TMyDataObjectHandler._CanCopyFiles(const ADataObject : IDataObject) : boolean;
var
eFORMATETC : IEnumFORMATETC;
OLEFormat : TFormatEtc;
iFetched : Integer;
begin
Result := false;
if Succeeded(ADataObject.EnumFormatEtc(DATADIR_GET, eFormatETC)) then
begin
if Succeeded(eFormatETC.Reset) then
begin
while(eFORMATETC.Next(1, OLEFormat, @iFetched) = S_OK) and (not Result) do
begin
Result := ( OLEFormat.cfFormat = CF_FileGroupDescriptorW )
or
( OLEFormat.cfFormat = CF_FileGroupDescriptorA )
or
( OLEFormat.cfFormat = CF_HDROP );
end;
end;
end;
end;
{------------------------------------------------------------------------------}
function TMyDataObjectHandler._CanCopyFile( AFileName : string ) : boolean;
begin
Result := not FileExists( ExpandUNCFileName(FDirectory + ExtractFileName(AFileName)) );
end;
{------------------------------------------------------------------------------}
function TMyDataObjectHandler._CopyFiles(AFileNames : TStringList) : HResult;
var
i: Integer;
begin
Result := S_OK;
i := 0;
while(i < AFileNames.Count) do
begin
if _CanCopyFile(AFileNames[i]) then
begin
Copyfile( Application.MainForm.Handle, PChar(AFileNames[i]), PChar(FDirectory + ExtractFileName(AFileNames[i])), false );
end;
inc(i);
end;
end;
{------------------------------------------------------------------------------}
procedure TMyDataObjectHandler._GetFileNames(AGroup: PDropFiles; var AFileNames : TStringList);
var
sFilename : PAnsiChar;
s : string;
begin
sFilename := PAnsiChar(AGroup) + AGroup^.pFiles;
while (sFilename^ <> #0) do
begin
if (AGroup^.fWide) then
begin
s := PWideChar(sFilename);
Inc(sFilename, (Length(s) + 1) * 2);
end
else
begin
s := PWideChar(sFilename);
Inc(sFilename, Length(s) + 1);
end;
AFileNames.Add(s);
end;
end;
{------------------------------------------------------------------------------}
function TMyDataObjectHandler._ProcessDropFiles(ADataObject: IDataObject; AGroup: PDropFiles) : HResult;
var
sFiles : TStringList;
begin
Result := S_OK;
sFiles := TStringList.Create;
try
_GetFileNames( AGroup, sFiles );
if (sFiles.Count > 0) then
begin
Result := _CopyFiles( sFiles );
end;
finally
sFiles.Free;
end;
end;
{------------------------------------------------------------------------------}
function TMyDataObjectHandler._ProcessStorageMedium(ADataObject: IDataObject; AMedium: STGMEDIUM; AFilename : string; AFileSize : Cardinal) : HResult;
var
StorageInterface : IStorage;
FileStorageInterface : IStorage;
sGUID : PGuid;
iCreateFlags : integer;
begin
Result := S_OK;
if _CanCopyFile(AFileName) then
begin
sGUID := nil;
StorageInterface := IStorage(AMedium.stg);
iCreateFlags := STGM_CREATE OR STGM_READWRITE OR STGM_SHARE_EXCLUSIVE;
Result := StgCreateDocfile(PWideChar(ExpandUNCFileName(FDirectory + AFilename)), iCreateFlags, 0, FileStorageInterface);
if Succeeded(Result) then
begin
Result := StorageInterface.CopyTo(0, sGUID, nil, FileStorageInterface);
if Succeeded(Result) then
begin
Result := FileStorageInterface.Commit(0);
end;
FileStorageInterface := nil;
end;
StorageInterface := nil;
end;
end;
{------------------------------------------------------------------------------}
function TMyDataObjectHandler._ProcessStreamMedium(ADataObject: IDataObject; AMedium: STGMEDIUM; AFileName : String; AFileSize : Cardinal) : HResult;
var
Stream : IStream;
myStream: TMyMemoryStream;
begin
Result := S_OK;
if _CanCopyFile(AFileName) then
begin
Stream := ISTREAM(AMedium.stm);
if (Stream <> nil) then
begin
myStream := TMyMemoryStream.Create;
try
myStream.LoadFromIStream(Stream, AFileSize);
myStream.SaveToFile(ExpandUNCFileName(FDirectory + AFileName));
finally
myStream.Free;
end;
end;
end;
end;
{------------------------------------------------------------------------------}
procedure TMyDataObjectHandler._ProcessFileContents(ADataObject: IDataObject; Index: UINT; AFileName : string; AFileSize : Cardinal);
var
Fetc: FORMATETC;
Medium: STGMEDIUM;
begin
Fetc.cfFormat := CF_FILECONTENTS;
Fetc.ptd := nil;
Fetc.dwAspect := DVASPECT_CONTENT;
Fetc.lindex := Index;
Fetc.tymed := TYMED_HGLOBAL or TYMED_ISTREAM or TYMED_ISTORAGE;
if SUCCEEDED(ADataObject.GetData(Fetc, Medium)) then
begin
try
case Medium.tymed of
TYMED_HGLOBAL : ;
TYMED_ISTREAM : _ProcessStreamMedium(ADataObject, Medium, AFileName, AFileSize);
TYMED_ISTORAGE : _ProcessStorageMedium(ADataObject, Medium, AFileName, AFileSize);
else ;
end;
finally
ReleaseStgMedium(Medium);
end;
end;
end;
{------------------------------------------------------------------------------}
procedure TMyDataObjectHandler._ProcessAnsiFiles(ADataObject: IDataObject; AGroup: PFileGroupDescriptorA);
var
I : UINT;
sFileName : AnsiString;
iSize : Cardinal;
begin
for I := 0 to AGroup^.cItems-1 do
begin
sFileName := AGroup^.fgd[I].cFileName;
if (AGroup^.fgd[I].dwFlags and FD_FILESIZE) = FD_FILESIZE then
begin
iSize := (AGroup^.fgd[I].nFileSizeLow and $7FFFFFFF);
end
else
begin
iSize := 0;
end;
_ProcessFileContents(ADataObject, I, string(sFileName), iSize);
end;
end;
{------------------------------------------------------------------------------}
procedure TMyDataObjectHandler._ProcessUnicodeFiles(ADataObject : IDataObject;
AGroup : PFileGroupDescriptorW);
var
I: UINT;
sFileName: WideString;
iSize: Cardinal;
begin
for I := 0 to AGroup^.cItems-1 do
begin
sFileName := AGroup^.fgd[I].cFileName;
if (AGroup^.fgd[I].dwFlags and FD_FILESIZE) = FD_FILESIZE then
begin
iSize := (AGroup^.fgd[I].nFileSizeLow and $7FFFFFFF);
end
else
begin
iSize := 0;
end;
_ProcessFileContents(ADataObject, I, sFileName, iSize);
end;
end;
{------------------------------------------------------------------------------}
function TMyDataObjectHandler._DoCopyFiles(const ADataObject : IDataObject) : HResult;
var
Fetc : FORMATETC;
Medium : STGMEDIUM;
Enum : IEnumFORMATETC;
Group : Pointer;
begin
Result := ADataObject.EnumFormatEtc(DATADIR_GET, Enum);
if FAILED(Result) then
Exit;
while (true) do
begin
Result := (Enum.Next(1, Fetc, nil));
if (Result = S_OK) then
begin
if (Fetc.cfFormat = CF_FILEGROUPDESCRIPTORA) or
(Fetc.cfFormat = CF_FILEGROUPDESCRIPTORW) or
(Fetc.cfFormat = CF_HDROP) then
begin
Result := ADataObject.GetData(Fetc, Medium);
if FAILED(Result) then
Exit;
try
if (Medium.tymed = TYMED_HGLOBAL) then
begin
Group := GlobalLock(Medium.hGlobal);
try
if Fetc.cfFormat = CF_FILEGROUPDESCRIPTORW then
begin
_ProcessUnicodeFiles(ADataObject, PFileGroupDescriptorW(Group));
break;
end
else if Fetc.cfFormat = CF_FILEGROUPDESCRIPTORA then
begin
_ProcessAnsiFiles(ADataObject, PFileGroupDescriptorA(Group));
break;
end
else if Fetc.cfFormat = CF_HDROP then
begin
_ProcessDropFiles(ADataObject, PDropFiles(Group));
break;
end;
finally
GlobalUnlock(Medium.hGlobal);
end;
end;
finally
ReleaseStgMedium(Medium);
end;
end;
end
else
break;
end;
end;
{------------------------------------------------------------------------------}
//function TMyDataObjectHandler._ExtractFileNameWithoutExt(const FileName: string): string;
//begin
// Result := ChangeFileExt(ExtractFileName(FileName), EmptyStr);
//end;
{------------------------------------------------------------------------------}
{ TMyMemoryStream }
function TMyMemoryStream.GetIStream: IStream;
var
oStreamAdapter : TStreamAdapter;
tPos : Int64;
begin
oStreamAdapter := TStreamAdapter.Create(Self);
oStreamAdapter.Seek(0, 0, tPos);
Result := oStreamAdapter as IStream;
end;
procedure TMyMemoryStream.LoadFromIStream(AStream : IStream; AFileSize : Cardinal);
var
iPos : Int64;
aStreamStat : TStatStg;
oOLEStream: TOleStream;
HR: Int64;
begin
oOLEStream := TOLEStream.Create(AStream);
try
Self.Clear;
Self.Position := 0;
try
HR := Self.CopyFrom( oOLEStream, 0 );
except
on E : Exception do
begin
showMessage(E.ClassName + ' ' + E.Message);
end;
end;
Self.Position := 0;
finally
oOLEStream.Free;
end;
end;
end.
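A hedged sketch of a workaround: Outlook is known to report cbSize = 0 from IStream.Stat for message contents (and may hand the data over as TYMED_ISTORAGE rather than TYMED_ISTREAM), so instead of trusting Stat, read in fixed-size chunks until the stream is exhausted. The method name below is hypothetical and this is untested against Outlook 2003:
procedure TMyMemoryStream.LoadFromIStreamChunked(AStream: IStream);
const
  BufSize = 65536;
var
  Buffer: array[0..BufSize - 1] of Byte;
  BytesRead: Longint;
  iPos: Int64;
begin
  Self.Clear;
  AStream.Seek(0, STREAM_SEEK_SET, iPos);
  repeat
    BytesRead := 0;
    // At end of stream Read returns S_FALSE (still a success code)
    // with BytesRead = 0, which terminates the loop.
    if Failed(AStream.Read(@Buffer, BufSize, @BytesRead)) then
      Break;
    if BytesRead > 0 then
      Self.WriteBuffer(Buffer, BytesRead);
  until BytesRead = 0;
  Self.Position := 0;
end;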
Source: (StackOverflow)
I am trying to run a classic ASP application which uses RDS (Remote Data Services) on Windows Server 2008.
<object id="RDS_ACCOUNTS" CLASSID="clsid:BD96C556-65A3-11D0-983A-00C04FC29E33"
height=1 width=1 VIEWASTEXT>
The following code is written in the window_load() event:
RDS_ACCOUNTS.ExecuteOptions = 1
RDS_ACCOUNTS.FetchOptions = 1
RDS_ACCOUNTS.Server = "<%=strServer%>"
RDS_ACCOUNTS.Connect =Connect
RDS_ACCOUNTS.SQL = "SELECT ACCOUNT_TYPE_ID, CLIENT_ID, ACCOUNT_TYPE_DESC
FROM TBL_AP_CHART_ACCOUNT_TYPE
WHERE CLIENT_ID=<% = Session("ClientID")%>
ORDER BY ACCOUNT_TYPE_DESC "
RDS_ACCOUNTS.Refresh
Dim AccountRst
Set AccountRst = RDS_ACCOUNTS.Recordset
Here the Connect variable gets its value from the RDSConn.inc file, which contains the value:
Handler=MSDFMAP.Handler;Data Source=AMTAPP;
This handler picks up its configuration from the msdfmap.ini file located in the C:\Windows folder, which contains the OLEDB connection string or DSN name.
But when I run this code, it throws the exception "Object or Provider is not able to perform the requested operation" on the RDS_ACCOUNTS.Refresh method.
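For reference, a hedged sketch of the corresponding msdfmap.ini section (section and key names follow the MSDFMAP handler convention; the connection string itself is a placeholder):
[connect AMTAPP]
Access=ReadWrite
Connect="Provider=SQLOLEDB;Data Source=MYSERVER;Initial Catalog=MYDB;User Id=xxx;Password=xxx"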
Source: (StackOverflow)
I have set up an RDS database instance with a security group where I use my EC2 Elastic IP as the CIDR/IP. I have also associated the security group with my EC2 instance.
My security group on the EC2 instance looks like this: I associated one of the 3306 port rules with my Elastic IP.
I have created a database and a table in phpMyAdmin and am trying to test it by printing out all the values using the code below:
<?php
// set database server access variables:
$host = "XXXXXXX.XXXXXXXXXX.eu-west-1.rds.amazonaws.com";
$user = "XXXXXXX";
$pass = "XXXXXXXX";
$db = "XXXXXXX";
$con = mysql_connect($host, $user, $pass, $db);
// Check connection
if (mysql_connect_error())
{
echo "Failed to connect to MySQL: " . mysql_connect_error();
}else { echo "You have connected successfully!";
}
$result = mysql_query($con, "SELECT * FROM `XXXXX` LIMIT 0, 30 ");
echo "<p>starting again..</p>";
while($row = mysql_fetch_assoc($result)){
//iterate over all the fields
foreach($row as $key => $val){
//generate output
echo $key . ": " . $val . "<BR />";
}
}
mysql_close($connection);
?>
The error that I am getting is Unknown database 'XXXX'. Any ideas?
EDIT 1
I have just changed all the mysqli statements to mysql, but the connection is still not successful, i.e. the database cannot be found.
EDIT 2
Here is a screenshot of my mysql privileges.
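A likely culprit, sketched below rather than asserted: the code mixes mysqli and mysql conventions. In the old mysql_* API the database is not an argument of mysql_connect() (its fourth parameter is new_link, not the database name), mysql_connect_error() does not exist (mysql_error() does), and mysql_query() takes the query first and the link second. A corrected sketch of the connection portion, keeping the deprecated mysql_* API the question uses:
<?php
// Connect with host, user and password only, then select the database.
$con = mysql_connect($host, $user, $pass);
if (!$con) {
    die("Failed to connect to MySQL: " . mysql_error());
}
if (!mysql_select_db($db, $con)) {
    die("Cannot select database: " . mysql_error($con));
}
// Query string first, link second (the reverse of mysqli_query).
$result = mysql_query("SELECT * FROM `XXXXX` LIMIT 0, 30", $con);
mysql_close($con);
?>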
Source: (StackOverflow)
I'm new to AWS and RDS. I've combed through help files and other Stack Overflow questions, but I can't seem to find out if I'm doing something wrong.
When I go to my RDS Instance, I see
Security Groups: default (active)
I click default, and it takes me to the SG page, where I create new groups.
However, any rules I put in those new groups don't work; only the rules I put in the default group work. In some of the documentation I see screenshots where, beside Security Groups on the instance page, it lists not default but a user-created group.
So is there some way to make all the new groups active, or a way to change which group has precedence on that instance page? Or am I going to have to put all my rules in the default group?
thanks in advance
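A hedged pointer: an instance only consults the security group(s) actually associated with it, so rules in other groups are ignored until those groups are attached through a Modify operation, for example (identifiers are placeholders):
aws rds modify-db-instance --db-instance-identifier mydb --db-security-groups mynewgroup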
Source: (StackOverflow)
I've set up a DB instance on AWS, and according to all the guides I should now be able to open MySQL Workbench and connect successfully, as I have a hostname, port, user ID and password.
However, when I enter all the details I specified when creating the instance, I get the error:
Failed to Connect to MySQL at with user
Then below, it says the same error with (10060) in brackets. I looked up this error but couldn't find a relevant solution.
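For context: error 10060 is a plain TCP connection timeout, which in the RDS case almost always means the security group does not allow inbound port 3306 from the client's IP (or the instance is not publicly accessible). A hedged example of opening the port for a single address on a VPC instance (group ID and address are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789 --protocol tcp --port 3306 --cidr 203.0.113.5/32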
Source: (StackOverflow)
I'm trying to figure out hosting requirements for our organization's site. Any guidance on this would be much appreciated!
I need to know how many / which kind of instances I'll need so I can start planning this in my head.
Info:
- We'll be running ExpressionEngine (PHP) to power our sites; there will be two sites, so we'll be using the Multi-site Manager
- We receive on average 85k hits daily - off months are around 6k a day, but it all balances out to an 85k average
- All images / media will be hosted on S3
- Database to run on RDS
- I'll cache the pages in the CMS to minimize load
I know we'll need a few EC2 instances, and I'm wondering what you suggest in terms of the number and type of instances. I haven't used the AWS load balancers before, but I'm sure I'll need them.
I appreciate any suggestions, as well as any links where I could read up on the requirements. Thank you!
Source: (StackOverflow)
I am using Amazon RDS for a MySQL DB. I want to run some SET commands, e.g.:
SET GLOBAL group_concat_max_len =18446744073709551615
But when I run this command I get this error:
ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s) for this operation
When I try to add the privilege, it does not allow me to. Any help or input?
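The standard workaround, since RDS never grants SUPER: set such variables through a DB parameter group attached to the instance instead, for example with the AWS CLI (the group name is a placeholder):
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-params \
    --parameters "ParameterName=group_concat_max_len,ParameterValue=18446744073709551615,ApplyMethod=immediate"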
Source: (StackOverflow)