Oracle Lecture Notes (Lectures 11~13)

Posted by HULIA(휴리아)
2018. 6. 3. 01:37 Backend Development/Databases

Lecture 11

Subqueries

-A SELECT statement nested inside another SELECT statement

-Written on the right-hand side of a comparison operator, and must always be enclosed in parentheses

-Executed once, before the main query runs


Single-row subqueries

-Return exactly one row

-Single-row comparison operators: =, <>, >, >=, <, <=


Multi-row subqueries

-Return two or more rows

-Multi-row operators: in, any (or some), all, exists

in -> true if the main query's comparison value matches at least one of the subquery's results


all -> true only if the main query's comparison holds against every value the subquery returns

So <all means "less than the minimum" and >all means "greater than the maximum"

ex) select ename, sal from emp where sal > all(select sal from emp where deptno = 30);

Outputs only the rows whose salary is greater than every salary in department 30

The same result can be written with a single-row subquery (using max)


any -> true if the main query's comparison holds against at least one value the subquery returns

ex) select ename, sal from emp where sal > any(select sal from emp where deptno = 30);

Outputs the rows whose salary is greater than the minimum salary in department 30

The same result can be written with a single-row subquery (using min)
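The `> all` / `> any` equivalences above can be sketched outside Oracle. SQLite has no ALL/ANY operators, so this minimal Python sketch (with hypothetical emp rows) uses the equivalent MAX/MIN single-row subqueries the notes describe:

```python
import sqlite3

# Hypothetical sample rows: (ename, sal, deptno)
rows = [("KING", 5000, 10), ("BLAKE", 2850, 30),
        ("ALLEN", 1600, 30), ("SMITH", 800, 20)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, sal INTEGER, deptno INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)", rows)

# "> ALL(...)" behaves like "> (SELECT MAX(...))"
gt_all = con.execute(
    "SELECT ename FROM emp "
    "WHERE sal > (SELECT MAX(sal) FROM emp WHERE deptno = 30) "
    "ORDER BY sal DESC").fetchall()

# "> ANY(...)" behaves like "> (SELECT MIN(...))"
gt_any = con.execute(
    "SELECT ename FROM emp "
    "WHERE sal > (SELECT MIN(sal) FROM emp WHERE deptno = 30) "
    "ORDER BY sal DESC").fetchall()

print(gt_all)  # [('KING',)]
print(gt_any)  # [('KING',), ('BLAKE',)]
```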


exists -> true if the subquery returns at least one matching row

exists only checks whether such a row exists,

whereas in compares against all of the actual values the subquery returns


When using exists, remember to correlate the main query and the subquery with an equi-join between their tables

ex) select ename, deptno, sal from emp e where exists(select * from dept d where d.deptno in (10,20) and e.deptno = d.deptno);

ex) select ename, deptno, sal from emp e where exists(select * from dept d where d.deptno = 40 and e.deptno = d.deptno);


Lecture 12

DDL (create, alter, drop)


Data types available in create table

char(size) - fixed-length character data / can store the same kind of data as varchar2, but always occupies the declared length regardless of the input's actual length / minimum size 1

varchar2(size) - variable-length character data / occupies only as much storage as the actual string / the maximum size must be specified; the minimum size is 1

number - internal number format; can store up to 40 digits / the sign and decimal point do not count toward the length

number(w) - a number of up to w digits, with at most 38 significant digits

number(w,d) - w is the total length, d is the number of digits after the decimal point / the decimal point itself does not count toward the length

date - dates from January 1, 4712 BC to December 31, 4712 AD

long - variable-length character data, up to 2GB

lob - stores variable-length binary data up to 4GB; can hold images, documents, executables

rowid - has a three-piece format; it is not stored in the DB and is not DB data

bfile - stores large binary data as an external file, up to 4GB

timestamp(n) - an extended form of the date type

interval year to month - stores a period in years and months

interval day to second - stores a period in days, hours, minutes, and seconds / expresses the exact difference between two date values


Identifiers

-Must start with a letter

-1 to 30 characters long

-May contain letters A-Z (upper or lower case), digits 0-9, and only the special characters _, $, #

-Must not duplicate Oracle reserved words or other object names

-No spaces allowed


Use DESC to inspect a table's structure


Creating a table with a subquery (copying both structure and contents)

You can create a table with the same structure and contents

ex) create table emp02 as select * from emp;

You can create a table with part of the structure and contents

ex) create table emp03 as select empno, ename from emp;

Copy only some of the rows of an existing table

ex) create table emp04 as select * from emp where deptno = 10;



Creating a table with a subquery (copying the structure only)

-Give the subquery above a where clause that is always false

ex) create table emp05 as select * from emp where 1=0;

***********Constraints are NOT copied when a table is created from a subquery
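The structure-only copy can be tried outside Oracle too. SQLite also supports CREATE TABLE ... AS SELECT and likewise does not carry constraints over; a minimal Python sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT, deptno INTEGER)")
con.execute("INSERT INTO emp VALUES (7839, 'KING', 10)")

# WHERE 1=0 is always false, so the new table gets the columns but no rows.
con.execute("CREATE TABLE emp05 AS SELECT * FROM emp WHERE 1=0")

row_count = con.execute("SELECT COUNT(*) FROM emp05").fetchone()[0]
info = list(con.execute("PRAGMA table_info(emp05)"))
cols = [r[1] for r in info]       # column names were copied
pk_flags = [r[5] for r in info]   # but the PRIMARY KEY constraint was not
print(row_count, cols, pk_flags)  # 0 ['empno', 'ename', 'deptno'] [0, 0, 0]
```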



Creating a table with a virtual column

create table vt001

(no1 number, no2 number, no3 number generated always as (no1+no2) virtual);

no3 holds the sum of no1 and no2

Only no1 and no2 can be inserted; when either column's value changes, no3 changes automatically


Changing a table's structure with alter table

-add column clause

ex) alter table emp01 add(job varchar2(9));

The new column is added at the end of the table

If rows already exist, the column is added to them as well, but its value is null


-modify column clause

ex) alter table emp01 modify(job varchar2(30));

If the column contains no data:

both the data type and the column size can be changed

If the column contains data:

the data type cannot be changed; the size can be increased, but cannot be reduced below the size of the data currently stored

***The only case where the type can be changed while data exists is converting between char and varchar2 (and only to the same or a larger size)


-drop column clause

ex) alter table emp01 drop column job;



Using the set unused option

-Does not drop the column, but logically disables its use

-If the table holds a lot of data, dropping a column can take quite a long time

-The table is locked for that whole time, blocking other work -> use set unused in that case (no lock is taken)

ex) alter table emp02 set unused(job);

Then run the actual drop during the hours of lowest usage

ex) alter table emp02 drop unused columns;



drop table

Removes the table entirely (both structure and contents)


truncate table removes every row of a table

Only the contents are removed / the structure remains


rename changes a table's name

ex) rename emp05 to test;


Differences between delete, truncate, and drop

delete - the table's storage does not shrink / only the contents are removed / selected rows can be deleted / can be rolled back

truncate - resets the table to its freshly created state / storage shrinks and all indexes are cleared / removes data only / cannot be rolled back

drop - releases the storage / removes the objects (indexes, constraints, etc.) / removes both contents and structure / cannot be rolled back


Data dictionary and data dictionary views

Data dictionary: system tables that store the various information needed to manage database resources efficiently

-Updated automatically whenever a table is created or altered

-Cannot be modified or deleted directly by users


Data dictionary views: present the contents of the data dictionary in a form users can understand

-user_ data dictionary views (the user_tables data dictionary view, .....)

Show information about objects owned by your own account

ex) desc user_tables;

ex) select table_name from user_tables order by table_name desc;


-all_ data dictionary views

Show information about objects you own or have been granted access to

ex) desc all_tables;

ex) select owner, table_name from all_tables;


-dba_ data dictionary views

Show information about objects accessible to database administrators (every object in the DB)

Accessible only to users with system privileges

ex) select owner, table_name from dba_tables;


Lecture 13

insert

ex) insert into dept03 (column list) values (value list);

ex) insert into dept03 select * from dept;

The column list can be omitted when a value is supplied for every column, in table order


Inserting multiple rows into multiple tables

insert all - inserts the subquery's result into several tables without conditions (the subquery's column names must match the target tables' column names)

ex) insert all

into emp_hir values(empno, ename, hiredate)

into emp_mgr values(empno, ename, mgr)

select empno, ename, hiredate, mgr from emp where deptno = 20;


ex) insert all

when hiredate > '1982/01/01' then into emp_hir01 values(empno, ename, hiredate)

when sal >= 2000 then into emp_sal values(empno, ename, sal)

select empno, ename, hiredate, sal from emp;


update

Without a where clause, an update changes every row, so be careful

ex) update emp01 set deptno = 40 where deptno = 10;

ex) update dept01 set loc = (select loc from dept01 where deptno = 40) where deptno = 20;


delete

Without a where clause, a delete removes every row, so be careful

ex) delete from emp01 where deptno = 30;

ex) delete from emp01 where deptno = (select deptno from dept where dname = 'SALES');


The merge statement combines tables

Merges two tables with the same structure into one

If the row already exists in the target table - it is updated with the new values (update)

If it does not exist - it is added as a new row (insert)


ex) merge into emp01

using emp02

on (emp01.empno = emp02.empno)

when matched then

update set

emp01.ename = emp02.ename,

emp01.job = emp02.job,

emp01.mgr = emp02.mgr,

emp01.hiredate = emp02.hiredate

when not matched then

insert values(emp02.empno, emp02.ename, emp02.job, emp02.mgr, emp02.hiredate);

Oracle Lecture Notes (Lectures 8~10)

Posted by HULIA(휴리아)
2018. 6. 3. 01:36 Backend Development/Databases

Lecture 8: Group functions (functions that combine one or more rows into a group and return a single result)

Kinds: sum, avg, max, min, count

Characteristic: rows whose column value is null are excluded from the calculation


Single-row functions, by contrast,

produce a result for each row, so the output is multiple rows


sum (total of the column's values)

avg (average of the column's values)

max (maximum of the column's values)

min (minimum of the column's values)

count (number of rows in the table that satisfy the condition)

-> duplicates are counted too -> use distinct to remove duplicates when counting distinct values

ex) select count(distinct job) job_count from emp;
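The count vs count(distinct) difference can be sketched with SQLite from Python (hypothetical sample rows; the real emp table has more columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, job TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("SMITH", "CLERK"), ("ALLEN", "SALESMAN"),
                 ("MILLER", "CLERK"), ("KING", "PRESIDENT")])

# COUNT(job) counts every non-null value, duplicates included.
plain = con.execute("SELECT COUNT(job) FROM emp").fetchone()[0]
# COUNT(DISTINCT job) counts each distinct value once.
dedup = con.execute("SELECT COUNT(DISTINCT job) FROM emp").fetchone()[0]
print(plain, dedup)  # 4 3
```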



****When a group function appears in a select statement

plain columns without a group function applied cannot appear alongside it

ex) using the min function together with ename raises an error -> use a subquery to get both



group by clause

Used to split the table's rows into groups keyed on a particular column

Form: select columns, group_function from table where condition group by column;

That is, it states which column's values the group function should be applied over

***Column aliases cannot be used here; you must write the actual column name



Grouping the employee table by department number

ex) select deptno from emp group by deptno;


Average salary per department

ex) select deptno, avg(sal) from emp group by deptno;

*****Plain columns normally cannot accompany a group function, but a column that also appears in group by is matched to the groups, so it is allowed


Total and average salary per department

select deptno, sum(sal), avg(sal) from emp group by deptno;

Maximum and minimum salary per department

select deptno, max(sal), min(sal) from emp group by deptno;

Number of employees per department, and how many of them receive a commission

select deptno, count(*), count(comm) from emp group by deptno;


having clause

Used to keep only the groups produced by group by that satisfy a condition

That is, it restricts the grouped results

Use it to filter on values computed by group functions

-> conditions on group functions


The where clause restricts the result with row-level conditions

Use it to fetch only the rows of the table that satisfy a condition

-> conditions on plain columns


ex) Department numbers and average salaries, for departments whose average salary is at least 2000

select deptno, avg(sal) from emp group by deptno having avg(sal) >= 2000;


ex) Maximum and minimum salary per department, but only for departments whose maximum salary is at least 2900

select deptno, max(sal), min(sal) from emp group by deptno having max(sal) >= 2900;
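The where-vs-having distinction runs the same way in SQLite; a minimal Python sketch with hypothetical salaries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (deptno INTEGER, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [(10, 2450), (10, 5000), (20, 800), (20, 3000), (30, 1250)])

# WHERE filters rows before grouping; HAVING filters the groups afterwards.
rows = con.execute(
    "SELECT deptno, AVG(sal) FROM emp "
    "GROUP BY deptno HAVING AVG(sal) >= 2000 "
    "ORDER BY deptno").fetchall()
print(rows)  # [(10, 3725.0)] - dept 20 (avg 1900) and dept 30 (avg 1250) are filtered out
```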



Lecture 9

Functions that compute subtotals/totals automatically

-rollup (computes subtotals of the given data)

-cube (additionally computes the grand total of the given data)

select deptno, job, count(*), sum(sal) from emp group by rollup(deptno, job);

select deptno, job, count(*), sum(sal) from emp group by job, rollup(deptno);

select deptno, job, count(*), sum(sal) from emp group by deptno, rollup(job);


select deptno, job, count(*), sum(sal) from emp group by cube(deptno, job);


Hierarchical queries: used to display parent-child relationships when querying data

-> use the connect by clause (defines how each row of the hierarchy links to its parent)

-> start with marks the starting row, typically the president

ex) select level, lpad(empno, level*4, '*') as empno01, substr(sys_connect_by_path(ename, '-'), 2) ename02, ename, mgr from emp

start with mgr is null

connect by prior empno = mgr;


***lpad(empno, level*4, '*') : print in a field level*4 bytes wide, padding the left side with *

***level is a pseudocolumn that does not exist in the table itself; it is available in any Oracle SQL and tells how deep in the hierarchy the row sits


Salary total and head count per level

ex) select level, sum(sal) total, count(empno) cnt from emp

start with mgr is null

connect by prior empno = mgr

group by level

order by level;


Functions newly added in Oracle 11g

-listagg (denormalizes a normalized table for output -> when you want rows that print vertically shown horizontally)

ex) set lines 200

col listagg for a40

select deptno, listagg(ename, '**') within group (order by hiredate) "listagg" from emp group by deptno;

** is the separator; order by supplies the ordering rule (here, hiredate)


-pivot (turns row-level data into columns, producing a report that is easy to read at a glance)


ex) select deptno, count(*) from emp group by deptno order by deptno;

-> select deptno, count(decode(job, 'CLERK', 0)) "CLERK", count(decode(job, 'MANAGER', 0)) "MANAGER", count(decode(job, 'PRESIDENT', 0)) "PRESIDENT", count(decode(job, 'ANALYST', 0)) "ANALYST", count(decode(job, 'SALESMAN', 0)) "SALESMAN" from emp group by deptno order by deptno;

-> select * from (select deptno, job, empno from emp)

pivot (

count(empno)

for job in('CLERK' as "CLERK", 'MANAGER' as "MANAGER", 'PRESIDENT' as "PRESIDENT", 'ANALYST' as "ANALYST", 'SALESMAN' as "SALESMAN")

)

order by deptno;


Head count and salary total per department/job

select * from (select deptno, job, empno, sal from emp)

pivot (

count(empno) as cnt, sum(nvl(sal, 0)) as sal

for job in('CLERK' as "CLERK", 'MANAGER' as "MANAGER", 'PRESIDENT' as "PRESIDENT", 'ANALYST' as "ANALYST", 'SALESMAN' as "SALESMAN")

)

order by deptno;


-rank

Computes and returns the rank of a value within a group of column values

-> aggregate form: rank(value) within group (order by column asc|desc)

-> analytic form: rank() over (order by column asc|desc)


ex) rank of a name when names are sorted ascending

select rank('SMITH') within group (order by ename) "RANK" from emp;


ex) employee number, name, salary, and salary rank within the department, for employees in department 10

select empno, ename, sal, rank() over (order by sal desc) "rank" from emp where deptno = 10;
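Analytic RANK() works the same way in SQLite (3.25 or later, as bundled with recent Python). A sketch with hypothetical department-10 salaries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, sal INTEGER, deptno INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [("KING", 5000, 10), ("CLARK", 2450, 10),
                 ("MILLER", 1300, 10), ("SMITH", 800, 20)])

# rank() over (order by sal desc): highest salary gets rank 1.
rows = con.execute(
    "SELECT ename, sal, RANK() OVER (ORDER BY sal DESC) AS rnk "
    "FROM emp WHERE deptno = 10 ORDER BY rnk").fetchall()
print(rows)  # [('KING', 5000, 1), ('CLARK', 2450, 2), ('MILLER', 1300, 3)]
```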




Lecture 10

Joins: pull only the information the user needs from data scattered across multiple tables, assemble it as if it were a single virtual table, and show the result


Kinds of joins

-cartesian product

Occurs when the where clause has no join condition on a common column, so every combination of rows appears in the result

It carries no real meaning -> it merely shows everything

ex) select * from emp, dept;


-equi join

The most commonly used join

Joins the rows of two tables where the values of a column they share are equal

ex) select * from emp, dept where emp.deptno = dept.deptno;

-> select ename, dname from emp, dept where emp.deptno = dept.deptno; (ambiguous columns must be prefixed with their table)

-> select ename, dname from emp, dept where emp.deptno = dept.deptno and ename = 'SCOTT';

-> select ename, dept.dname, emp.deptno from emp, dept where emp.deptno = dept.deptno and ename = 'SCOTT'; (prefix columns with their table where needed; a column that exists in only one table may be written without the prefix)

-> select e.ename, d.dname, e.deptno, d.deptno from emp e, dept d where e.deptno = d.deptno and e.ename = 'SCOTT'; (once a table alias is declared, column prefixes must use the alias - the original table name is no longer allowed)


-non-equi join

Joins on a condition other than equality on a shared column (greater than, less than, and so on)

-> where an equi join matches rows whose values are equal, a non-equi join matches rows on range conditions such as greater or smaller


Finding each employee's salary grade (salgrade holds losal and hisal for each grade)

select ename, sal, grade from emp, salgrade where sal >= losal and sal <= hisal;

select ename, sal, grade from emp, salgrade where sal between losal and hisal;
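A non-equi join runs identically in SQLite; a minimal Python sketch with hypothetical emp and salgrade rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
con.execute("CREATE TABLE salgrade (grade INTEGER, losal INTEGER, hisal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("SMITH", 800), ("ALLEN", 1600), ("KING", 5000)])
con.executemany("INSERT INTO salgrade VALUES (?, ?, ?)",
                [(1, 700, 1200), (2, 1201, 1400), (3, 1401, 2000),
                 (4, 2001, 3000), (5, 3001, 9999)])

# Non-equi join: rows match on a range condition, not on equality.
rows = con.execute(
    "SELECT ename, sal, grade FROM emp, salgrade "
    "WHERE sal BETWEEN losal AND hisal ORDER BY sal").fetchall()
print(rows)  # [('SMITH', 800, 1), ('ALLEN', 1600, 3), ('KING', 5000, 5)]
```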


Printing an employee's name, department name, and salary grade together

(the emp table holds only the department number and salary, not the department name or salary grade)

-> equi join for the department name, non-equi join for the salary grade

select e.ename, d.dname, s.grade from emp e, dept d, salgrade s where e.deptno = d.deptno and e.sal between s.losal and s.hisal;


-self join

Joining a table to itself

ex) To find out who an employee's manager is: there is no separate manager table (it would be duplicated data), since everything is stored in the employee table (use aliases so one table behaves like two)

select employee.ename || '''s manager is ' || manager.ename from emp employee, emp manager where employee.mgr = manager.empno;


-outer join

A join that also returns rows that do not satisfy the join condition

When two or more tables are joined, one side may have data

-> rows with no matching data on the other side would not be printed; the outer join exists to solve that problem


In the self join above, the president's mgr is null, so that row is excluded from the self join result

To make the president appear as well, use an outer join

ex) select employee.ename || '''s manager is ' || manager.ename from emp employee, emp manager where employee.mgr = manager.empno(+);

Append (+) after the column on the side that lacks the data
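Oracle's `(+)` syntax corresponds to a standard LEFT JOIN, which SQLite supports. A sketch of the self-join case with a hypothetical three-row emp table, showing the president dropped by the inner join but kept by the outer join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, mgr INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(7839, "KING", None), (7698, "BLAKE", 7839), (7499, "ALLEN", 7698)])

# Inner self join: KING disappears because his mgr is NULL.
inner = con.execute(
    "SELECT e.ename, m.ename FROM emp e JOIN emp m ON e.mgr = m.empno").fetchall()

# LEFT (outer) self join: KING is kept, with NULL for his manager.
outer = con.execute(
    "SELECT e.ename, m.ename FROM emp e LEFT JOIN emp m ON e.mgr = m.empno "
    "ORDER BY e.empno").fetchall()

print(len(inner), outer)
```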

Oracle Lecture Notes (Lectures 1~7)

Posted by HULIA(휴리아)
2018. 6. 3. 01:34 Backend Development/Databases

Lecture 1

Database: a collection of useful data - integrated (the same data is not stored in duplicate), stored (kept on storage a computer can access), operational (indispensable to the organization's purpose and functions), and shared (usable by multiple users for different purposes)

Data is organized so that retrieval is easy,

and modification and deletion are easy as well

Characteristics: ad hoc, unstructured queries are handled in real time -> real-time accessibility

Content changes through insertions, deletions, and updates while staying accurate and current -> continuous change

The same data can be shared by many users simultaneously, in different ways -> concurrent sharing

Records are referenced by the content the user asks for, not by address or position -> content-based reference


File systems

Redundancy (the same data is stored and managed in several places within one system, causing inconsistency -> problems with consistency, security, economy, integrity, etc.),

Dependency (applications and data are mutually dependent)


Sequences: a feature that generates automatically increasing values -> usually used for primary keys

start with

increment by

maxvalue or nomaxvalue

minvalue or nominvalue

cycle or nocycle

cache or nocache


select sequence_name, min_value, max_value, increment_by, cycle_flag from user_sequences;


Reading sequence values

nextval (returns the next value of the sequence) -> currval (returns the current value; valid only after nextval has been called in the session)


Where nextval and currval CAN be used

-a select statement that is not a subquery

-the select clause of an insert statement

-the values clause of an insert statement

-the set clause of an update statement


Where nextval and currval CANNOT be used

-the select clause of a view

-a select statement with the distinct keyword

-a select statement with group by, having, or order by

-subqueries of select, delete, update

-default values in create table or alter table


Synonyms: a feature that shortens owner.table references

-> private synonyms (for individual users), public synonyms (e.g. dual)


create synonym

drop synonym








SQL*Plus command summary

-Editing/running: list (show the buffer contents), run (show the buffer and execute it), @ (run a query from a file), / (run the query stored in the buffer)

-Files: edit, save, get, spool (capture - saves all output)

-Database: connect, host (drop to the OS prompt), exit

-System settings: column format (change the display format of a column's data)


Data dictionary: provides all information related to the database

ex) select * from tab; (tab is a legacy view listing your tables)


Command to check a table's structure: DESC


For date columns, write where hiredate >= '1982/08/01' (the date literal goes in quotes)


Logical operators for conditions

and, or, not (not matches only rows that fail the condition)

ex) where not deptno = 10

where deptno != 10


between ... and operator

where sal >= 2000 and sal <= 3000;

where sal between 2000 and 3000;

where sal not between 2000 and 3000;


in operator

where comm not in (300, 500, 600);


like searches

% : matches zero or more characters of any value

_ : matches exactly one character of any value


not like is also available


escape

An option that treats the like wildcards (%, _) as ordinary characters when searching for strings that contain them

ex) where ename like '%\%%' escape '\';


Checking for null

where comm is null;

where comm is not null;


order by (sorting)

order by comm asc | desc (asc is the default when omitted)


distinct - prevents duplicate values from being printed

ex) select distinct comm ~~~


Concatenation operator

ex) select ename || ' is a ' || job

Functions

-Single-row functions (process many rows one row at a time); group functions (take many rows at once and produce a single result)


Single-row functions

-numeric

-character

-date

-conversion

-general


The DUAL table

-a table for producing a one-row result

-often used to print the value of an arithmetic expression or pseudocolumn exactly once

ex) select 24*60 from dual;


Numeric functions

-ABS (absolute value), FLOOR (drop the digits after the decimal point), ROUND (round), TRUNC (truncate at a given position), MOD (remainder)


Character functions

-UPPER (to upper case), LOWER (to lower case), INITCAP (capitalize initials), LENGTH (length in characters), LENGTHB (length in bytes), INSTR (position of a substring), SUBSTR (extract part of a string), LPAD (left-pad with a given character), RPAD, LTRIM (strip leading characters), RTRIM, TRIM (strip from both ends)


Printing the hire year

select ename, 19 || substr(hiredate, 1, 2) yr, substr(hiredate, 4, 2) mon from emp;


Date functions

date + number : the date that many days after the date

date - number : the date that many days before the date

date - date : the interval between the two dates

ex) sysdate-1, sysdate+1


-Functions for the Date type

1. sysdate : returns the system date

ex) select sysdate from dual;


2. months_between

Number of months between two dates

ex) months_between(sysdate, hiredate)

Use round or trunc when decimal places appear


3. add_months

Adds months to a date

ex) add_months(hiredate, 4) : adds 4 months


next_day

Starting from the given date, returns the date of the next occurrence of the named weekday

ex) next_day(date, weekday)

-Korean weekday names: 일, 월, 화, ...

-English weekday names: SUN, MON, TUE, ...

-Numbers can be used instead of names; the mapping depends on the NLS territory settings (the lecture uses 7 = Sunday, 1 = Monday, 2 = Tuesday, and so on)


If entering a weekday such as '금' raises an error:

alter session set nls_language=korean;



last_day

Returns the last day of the given month



-Conversion functions

Convert number, character, and date values to another data type


number -> to_char() -> character -> to_date() -> date

date -> to_char() -> character -> to_number() -> number


ex) printing the hire date together with the weekday

to_char(hiredate, 'YYYY/MM/DD DAY') -> 1980/12/12 수요일

to_char(hiredate, 'YY/MON/DD DY') -> 80/12월/12 수

to_char(hiredate, 'YYYY/MM/DD, HH24:MI:SS') -> 2013/12/12, 23:24:55



ex) printing with the local currency symbol and thousands separators

to_char(sal, 'L999,999') -> does not zero-fill

to_char(sal, 'L000,000') -> zero-fills the digits



Default date format -> YY/MM/DD

ex) computing how many days of this year have passed

trunc(sysdate - to_date('2016/01/01', 'YYYY/MM/DD'))
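The date-arithmetic rules above (date ± number, date − date) can be sketched with Python's datetime, using fixed dates so the result is deterministic:

```python
from datetime import date, timedelta

# date - date: the interval between two dates, like sysdate - to_date(...)
elapsed = (date(2016, 3, 1) - date(2016, 1, 1)).days

# date + number: shift by days, like sysdate + 1
tomorrow = date(2016, 1, 1) + timedelta(days=1)

print(elapsed, tomorrow)  # 60 2016-01-02  (Jan has 31 days, Feb 2016 has 29)
```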


to_number('20,000', '99,999') - to_number('10,000', '99,999')



nvl function: converts null to 0 or another value

select sal*12+comm, sal*12+nvl(comm, 0) from emp; (the first expression is null whenever comm is null)


decode function: provides a way to choose among several cases

Form: decode(expression, value1, result1, value2, result2, value3, result3 ....)

Characteristic: applies only on exact matches


select ename, deptno, decode(deptno, 10, 'accounting', 20, 'research', 30, 'sales') as dname from emp;


case function: chooses one among several cases

case expression when condition1 then result1

when condition2 then result2

....

else result

end

Characteristic: conditions can use the full range of comparison operators, including range conditions
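CASE is standard SQL, so the decode example above can be tried in SQLite from Python (a hypothetical three-row dept table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (deptno INTEGER)")
con.executemany("INSERT INTO dept VALUES (?)", [(10,), (20,), (30,)])

# CASE is the standard-SQL counterpart of Oracle's decode().
rows = con.execute(
    "SELECT deptno, CASE deptno WHEN 10 THEN 'accounting' "
    "WHEN 20 THEN 'research' WHEN 30 THEN 'sales' END AS dname "
    "FROM dept ORDER BY deptno").fetchall()
print(rows)  # [(10, 'accounting'), (20, 'research'), (30, 'sales')]
```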

App) Creating a batch PID file in Java

Posted by HULIA(휴리아)
2018. 4. 27. 14:34 Backend Development/Java & Spring
==main==
//pid file creation
String pidDir = System.getProperty("pid.dir", jobmgrHome+System.getProperty("file.separator")+"pid");

try{
     PidUtil pidUtil = new PidUtil(pidDir, "jobmanager.pid");
if(pidUtil.existPidFile()){
     LoggerUtil.error(logger, "already running...");
     String pid = pidUtil.readPidFile();
     LoggerUtil.error(logger,"pid: "+pid);
     System.exit(0);
}

String pid = pidUtil.createPidFile();

LoggerUtil.info(logger, "PID file created. <pid : "+pid + ">");
}catch(IOException ie){
    ie.printStackTrace();
}

}




==PidUtil==
public class PidUtil{
       private String pidDir;
       private String pidFileName;

       public PidUtil(String pidDirectory, String pidFileName) {
     makeDir(pidDirectory);
     this.pidFileName = pidFileName;
     pidDir = new File(pidDirectory).getPath();
}
  

    public File getPidFile(){
           return new File(pidDir + System.getProperty("file.separator")+pidFileName);
    }

    public String getPID() throws IOException{
    String str = ManagementFactory.getRuntimeMXBean().getName();
    return str.split("@")[0];
}

public String createPidFile() throws IOException{
      String pid = null;
      pid = getPID();

      //create the pid file
      writePID(getPidFile(), pid);
      return pid;
}

public void deletePID(){
      delete(getPidFile());
}

public String readPidFile() throws IOException{
         return readFile(getPidFile());
}

public boolean existPidFile() throws IOException{
        boolean isRet = false;
//check whether the pid file exists
//if it exists, read the pid from it
//a pid of -1 means creation failed, so report that no pid file exists
//otherwise, on Windows 2000 check the pid with tlist; on later Windows versions use tasklist
      if(getPidFile().exists()){
          String pid = readFile(getPidFile());
         if( !pid.equals("-1")){
               String winOsName = System.getProperty("os.name");
              boolean isWindow = winOsName.startsWith("Windows");
              if( isWindow) {//windows
                  if(winOsName.indexOf("2000") > -1) {
      if(hasProcess(pid, "tlist")){
              isRet = true;
       }
}else{
       if(hasProcess(pid, "tasklist")){
               isRet = true;
       }
}

              }else{//linux & unix
                 if(hasProcess(pid, "ps -p "+ pid)){
       isRet = true;
}
             }
         }
      }
      return isRet;
}

private boolean hasProcess(String pid, String checkCMD) throws IOException{
       boolean isRet = false;
       Process p_start = Runtime.getRuntime().exec(checkCMD);
       BufferedReader stdout = new BufferedReader (new InputStreamReader(p_start.getInputStream()));
        String output;
        while((output=stdout.readLine())!=null){
         if(output.indexOf(pid) > -1 && (output.startsWith("java") || output.indexOf("java") > -1)){
               isRet = true;
               break;
           }
}
p_start.destroy();
return isRet;
}

private void writePID(File file, String pid) throws IOException{
     FileWriter fw = null;
     try{
           fw=new FileWriter(file);
           fw.write(pid);
           fw.close();
       }catch (IOException ioe){
            throw ioe;
       }finally{
            if(fw != null){
                 fw.close();
            }
       }
}

public synchronized void makeDir(String path){
        File dir = new File(path);
        if( !dir.exists()){
                dir.mkdirs();
        }
}

public String readFile(File file) throws IOException{
        FileReader fileReader = null;
        String s = null;
        try{
            fileReader = new FileReader(file);
            s = readReader(fileReader);
        } finally{
             if( fileReader != null){
                   fileReader.close();                
             }
        }
         return s;
}

public void delete(File file){
         if(file != null){
                 if(file.exists()){
                      file.delete();
                 }
         }
}


public String readReader(Reader input) throws IOException{
 try{
     StringBuffer buf = new StringBuffer();
     BufferedReader in = new BufferedReader(input);
     int ch;
     while((ch=in.read())!= -1){
             buf.append((char)ch);
     }
     return buf.toString();
}finally{
      input.close();
}

}
}
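The same pid-file idea can be sketched in a few lines of Python - a hypothetical stand-in for PidUtil that uses os.getpid() in place of ManagementFactory.getRuntimeMXBean().getName().split("@")[0] (and checks only file existence, not whether the process is still running):

```python
import os
import pathlib
import tempfile

# Hypothetical stand-in for PidUtil: create, read, check, and delete a pid file.
pid_dir = pathlib.Path(tempfile.mkdtemp())   # makeDir
pid_file = pid_dir / "jobmanager.pid"

pid = str(os.getpid())                       # getPID
pid_file.write_text(pid)                     # createPidFile / writePID
exists = pid_file.exists()                   # existPidFile (file check only)
read_back = pid_file.read_text()             # readPidFile
pid_file.unlink()                            # deletePID
print(exists, read_back == pid)  # True True
```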


=== Ubuntu script (main script)
#!/bin/bash

NAME="batch"
DEFAULT="/opt/~/env.sh"
DESC="~~~~Batch Application Serve for $NAME"

#check privileges
if [ "$(id -u)" -ne 0 ]; then
     echo "You need root privileges to run this script"
     exit 1
fi

# Make sure wildfly is started with system locale
if [ -r  /etc/default/locale ]; then
           . /etc/default/locale
           export LANG
fi

# Overwrite settings from default file
if [ -f "$DEFAULT" ]; then
              . "$DEFAULT"
fi

# Setup the JVM
if [ -z "$JAVA" ]; then
          if [ -n "$JAVA_HOME" ]; then
                  JAVA="$JAVA_HOME/bin/java"
          else
                   JAVA="java"
          fi
fi


# Check if the job manager is installed
if [ ! -f "$JOBMGR_HOME/JobManager.jar" ]; then
           log_failure_msg "$NAME is not installed in \"$JOBMGR_HOME\""
            exit 1
fi

if [ -z "$JOBMGR_USER" ]; then
            JOBMGR_USER=root
fi

# Check the job manager user
id $JOBMGR_USER > /dev/null 2>&1
if [ $? -ne 0 -o -z "$JOBMGR_USER" ]; then
            log_failure_msg "User \"$JOBMGR_USER\" does not exist..."
            exit 1
fi

# Check owner of JOBMGR_HOME
if [ ! $(stat -L -c "%U" "$JOBMGR_HOME") = $JOBMGR_USER ]; then
         log_failure_msg "The user \"$JOBMGR_USER\" is not owner of \"$JOBMGR_HOME\""
         exit 1
fi


# The amount of time to wait for startup
if [ -z "$STARTUP_WAIT" ]; then
          STARTUP_WAIT=120
fi


# The amount of time to wait for shutdown
if [ -z "$SHUTDOWN_WAIT" ]; then
            SHUTDOWN_WAIT=120
fi


# Location to keep the console log
if [ -z "$JOBMGR_CONSOLE_LOG" ]; then
          JOBMGR_CONSOLE_LOG="$JOBMGR_HOME/logs/console.log"
fi

export JOBMGR_CONSOLE_LOG

touch $JOBMGR_CONSOLE_LOG
chown $JOBMGR_USER $JOBMGR_CONSOLE_LOG

# Location to set the pid file
JOBMGR_PIDFILE="$JOBMGR_HOME/pid/jobmanager.pid"
export JOBMGR_PIDFILE

# Helper function to check the status of the service
check_status() {
         echo "pidofproc -p \"$JOBMGR_PIDFILE\" \"$JAVA\" >/dev/null 2>&1"
         pidofproc -p "$JOBMGR_PIDFILE" "$JAVA" >/dev/null 2>&1
}


case "$1" in
  start)
       log_daemon_msg "Starting $DESC"
       check_status
       status_start=$?
       if [ $status_start -eq 3 ]; then
              cat /dev/null > "$JOBMGR_CONSOLE_LOG"
              source $DEFAULT; java $JOBMGR_OPTS $JAVA_OPTS -jar JobManager.jar > ${JOBMGR_CONSOLE_LOG} 2>&1 &

               count=0
               launched=0
               until [ $count -gt $STARTUP_WAIT ]
                do
                           if check_status; then
                                     launched=1
                                     break
                           fi
                           sleep 1
                            count=$((count + 1));
                  done
                 
                   if [ $launched -eq 1 ]; then
                              chown $JOBMGR_USER $(dirname "$JOBMGR_PIDFILE") || true
                   fi
                  
                   if check_status; then
                               log_end_msg 0
                   else
                               log_end_msg 1
                    fi

                    if [ $launched -eq 0 ]; then
                                log_warning_msg "$DESC hasn't started within the timeout allowed"
                                log_warning_msg "please review file \"$JOBMGR_CONSOLE_LOG\" to see the status of the service"
                     fi
       elif [ $status_start -eq 1 ]; then
                log_failure_msg "$DESC is not running but the pid file exists"
                exit 1
       elif [ $status_start -eq 0 ]; then
                log_success_msg "$DESC (already running)"
        fi   
      
     ;;



  stop)
        check_status
        status_stop=$?
        if [ $status_stop -eq 0 ]; then
                  read kpid < "$JOBMGR_PIDFILE"
                  log_daemon_msg "Stopping $DESC"
                   if check_status; then
                               kill $kpid
                   fi

                    rm "$JOBMGR_PIDFILE"
                   
                     log_end_msg 0
          elif [ $status_stop -eq 1 ]; then
                    log_action_msg "$DESC is not running but the pid file exists, cleaning up"
                    rm -f $JOBMGR_PIDFILE
           elif [ $status_stop -eq 3 ]; then
                     log_action_msg "$DESC is not running"
           fi
      ;;

  restart)
         check_status
         status_restart=$?
         if [ $status_restart -eq 0 ]; then
                       $0 stop
         fi
         $0 start

         ;;

  status)
          check_status
          status=$?
          if [ $status -eq 0 ]; then
                 read pid < $JOBMGR_PIDFILE
                 log_action_msg "$DESC is running with pid $pid"
                 exit 0
           elif [ $status -eq 1 ]; then
                 log_action_msg "$DESC is not running and the pid file exists"
                 exit 1
            elif [ $status -eq 3 ]; then
                 log_action_msg "$DESC is not running"
                 exit 3
            else
                  log_action_msg "Unable to determine $NAME status"
                 exit 4
            fi

  ;;

  *)
   log_action_msg "Usage: $0 {start|stop|restart|status}"
   exit 2
   ;;

esac

exit 0


=== Ubuntu script (env script)
This is the env.sh file.

#!/bin/bash


export SERVER_HOME="/opt"
export SERVER_NAME="Batch"
export JOBMGR_HOME="${SERVER_HOME}/${SERVER_NAME}"
export SERVER_TYPE="Dev"


LIB_DIR=${JOBMGR_HOME}/lib

export JAVA_HOME=/opt/java
export CLASSPATH=.:/opt/java/jre/lib:$LIB_DIR

export PATH=$JAVA_HOME/bin:$PATH

if [ "x$JOBMGR_OPTS" = "x" ]; then
     JOBMGR_OPTS="-Djobmanager"
     JOBMGR_OPTS="$JOBMGR_OPTS -Djobmgr.home=${JOBMGR_HOME}"
JOBMGR_OPTS="$JOBMGR_OPTS -Dserver.type=${SERVER_TYPE}"
JOBMGR_OPTS="$JOBMGR_OPTS -Djobmgr.resourcepath=file:${JOBMGR_HOME}/conf/"
fi

if [ "x$JAVA_OPTS" = "x" ]; then
        JAVA_OPTS="-server"
        JAVA_OPTS="$JAVA_OPTS -noverify"
         JAVA_OPTS="$JAVA_OPTS -Xms512m"
         JAVA_OPTS="$JAVA_OPTS -Xmx5120m"
         JAVA_OPTS="$JAVA_OPTS -XX:NewRatio=7"  # old generation gets 7/8 of the heap (old:young = 7:1)
         JAVA_OPTS="$JAVA_OPTS -XX:PermSize=64m"
         JAVA_OPTS="$JAVA_OPTS -XX:MaxPermSize=128m"
         JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"  # collect the young generation in parallel
         JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"  # enable the CMS collector
         JAVA_OPTS="$JAVA_OPTS -XX:+CMSParallelRemarkEnabled"  # run the remark phase in parallel
         JAVA_OPTS="$JAVA_OPTS -XX:CMSFullGCsBeforeCompaction=0"  # every concurrent full GC performs compaction
         JAVA_OPTS="$JAVA_OPTS -XX:+ExplicitGCInvokesConcurrent"  # System.gc() runs a CMS GC instead of a stop-the-world full GC
         JAVA_OPTS="$JAVA_OPTS -verbose:gc"
         JAVA_OPTS="$JAVA_OPTS -Xloggc:$JOBMGR_HOME/gclog/gc_$(date "+%Y%m%d%H").log"
         JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails"  # detailed GC output
         JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDateStamps"  # log the system date
         JAVA_OPTS="$JAVA_OPTS -XX:+PrintHeapAtGC"
         JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
         JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=$JOBMGR_HOME/gclog/java_pid.hprof"
         JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"
         JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote"
         JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=8286"
         JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl=false"
         JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
         JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
         JAVA_OPTS="$JAVA_OPTS -Dsun.lang.ClassLoader.allowArraySyntax=true"
         JAVA_OPTS="$JAVA_OPTS -Dsun.net.inetaddr.ttl=10"
         JAVA_OPTS="$JAVA_OPTS -Dsun.net.inetaddr.negative.ttl=10"
fi


export JAVA_OPTS JOBMGR_OPTS

Folder location when installing with npm -g

Posted by HULIA(휴리아)
2018. 4. 23. 01:21 Backend Development/NodeJS

Windows:

C:\Users\user\AppData\Roaming\npm\node_modules

App) How to run a jar file

Posted by HULIA(휴리아)
2018. 4. 13. 10:16 Backend Development/Java & Spring
Basic way to run it:
java -jar jarfilename.jar

Running with options:
java -Dserver.type=Local -jar jarfilename.jar

Note that if a -D option comes after -jar, it is passed to the jar as a program argument instead of being applied as a JVM system property

Spring Framework main projects

Posted by HULIA(휴리아)
2018. 4. 5. 01:58 Backend Development/Java & Spring
Main Projects

1.Spring IO Platform

2.Spring Boot

3.Spring Framework

4.Spring Cloud Data Flow

5.Spring Cloud

6.Spring Data

7.Spring Integration

8.Spring Batch

9.Spring Security

10.Spring Hateoas

11.Spring Rest Docs

12.Spring Social

13.Spring Amqp

14.Spring Mobile

15.Spring For Android

16.Spring Web Flow

17.Spring Web Services

18.Spring Ldap

19.Spring Session

20.Spring Shell

21.Spring Flo

22.Spring Kafka

23.Spring Statemachine



Spring IO Platform

https://platform.spring.io/platform/

Spring IO is a cohesive, versioned platform for building modern applications. It is a modular, enterprise-grade distribution that delivers a curated set of dependencies while keeping developers in full control of deploying only the parts they need. Spring IO is 100% open source, lean, and modular.


The Spring IO platform includes Foundation Layer modules and Execution Layer domain-specific runtimes (DSRs). The Foundation layer represents the core Spring modules and associated third-party dependencies that have been harmonized to ensure a smooth development experience. The DSRs provided by the Spring IO Execution Layer dramatically simplify building production-ready, JVM-based workloads. The first release of Spring IO includes two DSRs: Spring Boot and Grails.


Features

One platform, many workloads - build web, integration, batch, reactive or big data applications

Radically simplified development experience with Spring Boot

Production-ready features provided out of the box

Curated and harmonized dependencies that just work together

Modular platform that allows developers to deploy only the parts they need

Support for embedded runtimes, classic application server, and PaaS deployments

Depends only on Java SE, and supports Groovy, Grails and some Java EE

Works with your existing dependency management tools such as Maven and Gradle

The Spring IO Platform is certified to work on JDK 7 and 8*

*While the Spring IO Platform supports JDK 7 and 8, many individual Spring projects also support older JDK versions. Please refer to the individual projects' documentation (http://spring.io/docs) for the specific minimum requirements.



Spring Boot

https://projects.spring.io/spring-boot/

Takes an opinionated view of building production-ready Spring applications. Spring Boot favors convention over configuration and is designed to get you up and running as quickly as possible.


Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.


Features

Create stand-alone Spring applications

Embed Tomcat, Jetty or Undertow directly (no need to deploy WAR files)

Provide opinionated 'starter' POMs to simplify your Maven configuration

Automatically configure Spring whenever possible

Provide production-ready features such as metrics, health checks and externalized configuration

Absolutely no code generation and no requirement for XML configuration

The reference guide includes detailed descriptions of all the features, plus an extensive howto for common use cases.



Spring Framework

https://projects.spring.io/spring-framework/

The Spring Framework provides a comprehensive programming and configuration model for modern Java-based enterprise applications - on any kind of deployment platform. A key element of Spring is infrastructural support at the application level: Spring focuses on the "plumbing" of enterprise applications so that teams can focus on application-level business logic, without unnecessary ties to specific deployment environments.


Features

Core technologies: dependency injection, events, resources, i18n, validation, data binding, type conversion, SpEL, AOP.

Testing: mock objects, TestContext framework, Spring MVC Test, WebTestClient.

Data Access: transactions, DAO support, JDBC, ORM, Marshalling XML.

Spring MVC and Spring WebFlux web frameworks

Integration: remoting, JMS, JCA, JMX, email, tasks, scheduling, cache.

Languages: Kotlin, Groovy, dynamic languages.


Minimum requirements

JDK 8+ for Spring Framework 5.x

JDK 6+ for Spring Framework 4.x

JDK 5+ for Spring Framework 3.x



Spring Cloud Data Flow

https://cloud.spring.io/spring-cloud-dataflow/

Spring Cloud Data Flow is a toolkit for building data integration and real-time data processing pipelines.


Pipelines consist of Spring Boot apps, built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks. This makes Spring Cloud Data Flow suitable for a range of data processing use cases, from import/export to event streaming and predictive analytics.


The Spring Cloud Data Flow server uses Spring Cloud Deployer, to deploy pipelines onto modern runtimes such as Cloud Foundry, Kubernetes, Apache Mesos or Apache YARN.


A selection of pre-built stream and task/batch starter apps for various data integration and processing scenarios facilitate learning and experimentation.


Custom stream and task applications, targeting different middleware or data services, can be built using the familiar Spring Boot style programming model.


A simple stream pipeline DSL makes it easy to specify which apps to deploy and how to connect outputs and inputs. A new composed task DSL was added in v1.2.


The dashboard offers a graphical editor for building new pipelines interactively, as well as views of deployable apps and running apps with metrics.


The Spring Cloud Data Flow server exposes a REST API for composing and deploying data pipelines. A separate shell makes it easy to work with the API from the command line.


Platform Implementations

An easy way to get started on Spring Cloud Data Flow would be to follow the platform-specific implementation links from the table below. Each of the implementations evolves in isolation with independent release cadences. It is highly recommended to review the platform-specific reference docs to learn more about the feature capabilities.


Server Type            Stable Release          Milestone/Snapshot Release
Local Server           1.4.0.RELEASE [docs]    1.5.0.BUILD-SNAPSHOT [docs]
Cloud Foundry Server   1.4.0.RELEASE [docs]    1.5.0.BUILD-SNAPSHOT [docs]
Kubernetes Server      1.4.0.RELEASE [docs]    1.5.0.BUILD-SNAPSHOT [docs]
Apache YARN Server     1.2.2.RELEASE [docs]    1.2.3.BUILD-SNAPSHOT [docs]
Apache Mesos Server    1.0.0.RELEASE [docs]    1.1.0.BUILD-SNAPSHOT [docs]



Spring Cloud

https://projects.spring.io/spring-cloud/

Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems (e.g. configuration management, service discovery, circuit breakers, intelligent routing, micro-proxy, control bus, one-time tokens, global locks, leadership election, distributed sessions, cluster state). Coordination of distributed systems leads to boiler plate patterns, and using Spring Cloud developers can quickly stand up services and applications that implement those patterns. They will work well in any distributed environment, including the developer's own laptop, bare metal data centres, and managed platforms such as Cloud Foundry.


Spring Cloud builds on Spring Boot by providing a bunch of libraries that enhance the behaviour of an application when added to the classpath. You can take advantage of the basic default behaviour to get started really quickly, and then when you need to, you can configure or extend to create a custom solution.




Spring Data

https://projects.spring.io/spring-data/

Spring Data’s mission is to provide a familiar and consistent, Spring-based programming model for data access while still retaining the special traits of the underlying data store. 


It makes it easy to use data access technologies, relational and non-relational databases, map-reduce frameworks, and cloud-based data services. This is an umbrella project which contains many subprojects that are specific to a given database. The projects are developed by working together with many of the companies and developers that are behind these exciting technologies.


Features

Powerful repository and custom object-mapping abstractions

Dynamic query derivation from repository method names

Implementation domain base classes providing basic properties

Support for transparent auditing (created, last changed)

Possibility to integrate custom repository code

Easy Spring integration via JavaConfig and custom XML namespaces

Advanced integration with Spring MVC controllers

Experimental support for cross-store persistence
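The "dynamic query derivation from repository method names" feature above can be illustrated with a deliberately simplified, hypothetical plain-Java sketch of the naming convention — this is not Spring Data's actual parser, which supports a far richer grammar (`And`, `Or`, `Between`, and so on):

```java
// Hypothetical sketch: derive the filtered property from a method name,
// as Spring Data does (in a much more sophisticated way) for repository
// methods such as findByLastname(String lastname).
public class QueryDerivation {
    public static String derivedProperty(String methodName) {
        String prefix = "findBy";
        if (!methodName.startsWith(prefix) || methodName.length() == prefix.length()) {
            throw new IllegalArgumentException("not a derived query: " + methodName);
        }
        // "findByLastname" -> "Lastname" -> "lastname"
        String prop = methodName.substring(prefix.length());
        return Character.toLowerCase(prop.charAt(0)) + prop.substring(1);
    }
}
```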


Main modules

Spring Data Commons - Core Spring concepts underpinning every Spring Data project.

Spring Data Gemfire - Provides easy configuration and access to GemFire from Spring applications.

Spring Data JPA - Makes it easy to implement JPA-based repositories.

Spring Data JDBC - JDBC-based repositories.

Spring Data KeyValue - Map-based repositories and SPIs to easily build a Spring Data module for key-value stores.

Spring Data LDAP - Provides Spring Data repository support for Spring LDAP.

Spring Data MongoDB - Spring based, object-document support and repositories for MongoDB.

Spring Data REST - Exports Spring Data repositories as hypermedia-driven RESTful resources.

Spring Data Redis - Provides easy configuration and access to Redis from Spring applications.

Spring Data for Apache Cassandra - Spring Data module for Apache Cassandra.

Spring Data for Apache Solr - Spring Data module for Apache Solr.


Community modules

Spring Data Aerospike - Spring Data module for Aerospike.

Spring Data ArangoDB - Spring Data module for ArangoDB.

Spring Data Couchbase - Spring Data module for Couchbase.

Spring Data Azure DocumentDB - Spring Data module for Microsoft Azure DocumentDB.

Spring Data DynamoDB - Spring Data module for DynamoDB.

Spring Data Elasticsearch - Spring Data module for Elasticsearch.

Spring Data Hazelcast - Provides Spring Data repository support for Hazelcast.

Spring Data Jest - Spring Data for Elasticsearch based on the Jest REST client.

Spring Data Neo4j - Spring based, object-graph support and repositories for Neo4j.

Spring Data Spanner - Google Spanner support via Spring Cloud GCP.

Spring Data Vault - Vault repositories built on top of Spring Data KeyValue.


Related modules

Spring Data JDBC Extensions - Provides extensions to the JDBC support provided in the Spring Framework.

Spring for Apache Hadoop - Simplifies Apache Hadoop by providing a unified configuration model and easy to use APIs for using HDFS, MapReduce, Pig, and Hive.

Spring Content - Associate content with your Spring Data Entities and store it in a number of different stores including the File-system, S3, Database or Mongo’s GridFS.


Release train

Spring Data is an umbrella project consisting of independent projects with, in principle, different release cadences. To manage the portfolio, a BOM (Bill of Materials - see this example) is published with a curated set of dependencies on the individual project. The release trains have names, not versions, to avoid confusion with the sub-projects.


The names are an alphabetic sequence (so you can sort them chronologically) with names of famous computer scientists and software developers. When point releases of the individual projects accumulate to a critical mass, or if there is a critical bug in one of them that needs to be available to everyone, the release train will push out “service releases” with names ending “-SRX”, where “X” is a number.


Currently the release train contains the following modules:


Spring Data Commons

Spring Data JPA

Spring Data KeyValue

Spring Data LDAP

Spring Data MongoDB

Spring Data Gemfire

Spring Data for Apache Geode

Spring Data REST

Spring Data Redis

Spring Data for Apache Cassandra

Spring Data for Apache Solr

Spring Data Couchbase (community module)

Spring Data Elasticsearch (community module)

Spring Data Neo4j (community module)



Spring Integration

https://projects.spring.io/spring-integration/

Extends the Spring programming model to support the well-known Enterprise Integration Patterns. Spring Integration enables lightweight messaging within Spring-based applications and supports integration with external systems via declarative adapters. Those adapters provide a higher-level of abstraction over Spring's support for remoting, messaging, and scheduling. Spring Integration's primary goal is to provide a simple model for building enterprise integration solutions while maintaining the separation of concerns that is essential for producing maintainable, testable code.


Using the Spring Framework encourages developers to code using interfaces and use dependency injection (DI) to provide a Plain Old Java Object (POJO) with the dependencies it needs to perform its tasks. Spring Integration takes this concept one step further, where POJOs are wired together using a messaging paradigm and individual components may not be aware of other components in the application. Such an application is built by assembling fine-grained reusable components to form a higher level of functionality. With careful design, these flows can be modularized and also reused at an even higher level.


In addition to wiring together fine-grained components, Spring Integration provides a wide selection of channel adapters and gateways to communicate with external systems. Channel Adapters are used for one-way integration (send or receive); gateways are used for request/reply scenarios (inbound or outbound). For a full list of adapters and gateways, refer to the reference documentation.


The Spring Cloud Stream project builds on Spring Integration, where Spring Integration is used as an engine for message-driven microservices.
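The wiring idea described above — small components that are unaware of each other, composed into a message flow — can be sketched in plain Java. This is an illustration of the Filter and Transformer patterns only, not the Spring Integration API:

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Plain-Java sketch of a tiny message flow: each component knows nothing
// about its neighbours; the "flow" is just their composition.
public class MiniFlow {
    public static List<String> run(List<String> messages) {
        Predicate<String> filter = msg -> !msg.isEmpty();           // Filter EIP
        Function<String, String> transformer = String::toUpperCase; // Transformer EIP
        return messages.stream()
                .filter(filter)
                .map(transformer)
                .collect(Collectors.toList());
    }
}
```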


Features

Implementation of most of the Enterprise Integration Patterns

Endpoint

Channel (Point-to-point and Publish/Subscribe)

Aggregator

Filter

Transformer

Control Bus

Integration with External Systems

ReST/HTTP

FTP/SFTP

Twitter

WebServices (SOAP and ReST)

TCP/UDP

JMS

RabbitMQ

Email

The framework has extensive JMX support

Exposing framework components as MBeans

Adapters to obtain attributes from MBeans, invoke operations, send/receive notifications




Spring Batch

https://projects.spring.io/spring-batch/

A lightweight, comprehensive batch framework designed to enable the development of robust batch applications vital for the daily operations of enterprise systems.



Spring Batch provides reusable functions that are essential in processing large volumes of records, including logging/tracing, transaction management, job processing statistics, job restart, skip, and resource management. It also provides more advanced technical services and features that will enable extremely high-volume and high performance batch jobs through optimization and partitioning techniques. Simple as well as complex, high-volume batch jobs can leverage the framework in a highly scalable manner to process significant volumes of information.
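The "chunk based processing" mentioned below can be pictured with a simplified plain-Java sketch — not Spring Batch's actual ItemReader/ItemProcessor/ItemWriter API — in which items are processed one by one but written (committed) in fixed-size chunks:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of chunk-based processing: process items individually,
// but group writes into chunks so one "transaction" covers many records.
public class ChunkSketch {
    public static List<List<Integer>> process(List<Integer> items, int chunkSize) {
        List<List<Integer>> committedChunks = new ArrayList<>();
        List<Integer> chunk = new ArrayList<>();
        for (Integer item : items) {
            chunk.add(item * 2);              // the "process" step (doubling, for illustration)
            if (chunk.size() == chunkSize) {  // chunk boundary -> "write"/commit
                committedChunks.add(chunk);
                chunk = new ArrayList<>();
            }
        }
        if (!chunk.isEmpty()) {
            committedChunks.add(chunk);       // flush the final partial chunk
        }
        return committedChunks;
    }
}
```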


Features

Transaction management

Chunk based processing

Declarative I/O

Start/Stop/Restart

Retry/Skip

Web based administration interface (Spring Cloud Data Flow)



Spring Security

https://projects.spring.io/spring-security/

Spring Security is a powerful and highly customizable authentication and access-control framework. It is the de-facto standard for securing Spring-based applications.


Spring Security is a framework that focuses on providing both authentication and authorization to Java applications. Like all Spring projects, the real power of Spring Security is found in how easily it can be extended to meet custom requirements.


Features

Comprehensive and extensible support for both Authentication and Authorization

Protection against attacks like session fixation, clickjacking, cross site request forgery, etc

Servlet API integration

Optional integration with Spring Web MVC

Much more…



Spring Hateoas

https://projects.spring.io/spring-hateoas/

Create REST representations that follow the HATEOAS principle from your Spring-based applications.


Spring HATEOAS provides some APIs to ease creating REST representations that follow the HATEOAS principle when working with Spring and especially Spring MVC. The core problem it tries to address is link creation and representation assembly.
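The core problem named here — link creation and representation assembly — can be pictured with a minimal, hypothetical plain-Java model (not the Spring HATEOAS API): a representation that carries named links alongside its data.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a HATEOAS-style representation: the payload plus
// a map of rel -> href links that tell the client what it can do next.
public class Resource {
    public final String content;
    public final Map<String, String> links = new LinkedHashMap<>();

    public Resource(String content) {
        this.content = content;
    }

    public Resource withLink(String rel, String href) {
        links.put(rel, href);
        return this; // fluent style, loosely echoing link-builder APIs
    }
}
```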


Features

Model classes for link, resource representation models

Link builder API to create links pointing to Spring MVC controller methods

Support for hypermedia formats like HAL



Spring Rest Docs

https://projects.spring.io/spring-restdocs/

Document RESTful services by combining hand-written documentation with auto-generated snippets produced with Spring MVC Test.


Spring REST Docs helps you to document RESTful services. It combines hand-written documentation written with Asciidoctor and auto-generated snippets produced with Spring MVC Test. This approach frees you from the limitations of the documentation produced by tools like Swagger. It helps you to produce documentation that is accurate, concise, and well-structured. This documentation then allows your users to get the information they need with a minimum of fuss.




Spring Social

https://projects.spring.io/spring-social/

Connect your Spring application with Software-as-a-Service (SaaS) API providers such as Facebook, Twitter, and LinkedIn.


SPRING SOCIAL CORE

A framework for creating connections between a Spring application and an API. Also includes support for "Sign in With [Provider]" for authenticating via an API provider.


SPRING SOCIAL FACEBOOK

A provider extension for Spring Social to enable connectivity with Facebook and an API binding for Facebook's Graph API.


SPRING SOCIAL TWITTER

A provider extension for Spring Social to enable connectivity with Twitter and an API binding for Twitter's REST API.


SPRING SOCIAL LINKEDIN

A provider extension for Spring Social to enable connectivity with LinkedIn and an API binding for LinkedIn's REST API.



Spring Amqp

https://projects.spring.io/spring-amqp/

The Spring AMQP project applies core Spring concepts to the development of AMQP-based messaging solutions. It provides a "template" as a high-level abstraction for sending and receiving messages. It also provides support for Message-driven POJOs with a "listener container". These libraries facilitate management of AMQP resources while promoting the use of dependency injection and declarative configuration. In all of these cases, you will see similarities to the JMS support in the Spring Framework.



The project consists of two parts; spring-amqp is the base abstraction, and spring-rabbit is the RabbitMQ implementation.


Features

Listener container for asynchronous processing of inbound messages

RabbitTemplate for sending and receiving messages

RabbitAdmin for automatically declaring queues, exchanges and bindings




Spring Mobile

https://projects.spring.io/spring-mobile/

Spring Mobile is an extension to Spring MVC that aims to simplify the development of mobile web applications.


Spring Mobile is a framework that provides capabilities to detect the type of device making a request to your Spring web site and serve alternative views based on that device. Like all Spring projects, the real power of Spring Mobile is found in how easily it can be extended.


Features

A Device resolver abstraction for server-side detection of mobile and tablet devices

Site preference management that allows the user to indicate if he or she prefers a "normal", "mobile", or "tablet" experience

A site switcher capable of switching the user to the most appropriate site, either mobile, tablet, or normal, based on his or her device and optionally indicated site preference

Device aware view management for organizing and managing different views for specific devices



Spring For Android

https://projects.spring.io/spring-android/

Spring for Android is an extension of the Spring Framework that aims to simplify the development of native Android applications.


Spring for Android is a framework that is designed to provide components of the Spring family of projects for use in Android apps. Like all Spring projects, the real power of Spring for Android is found in how easily it can be extended.


Features

A Rest Client for Android

Auth support for accessing secure APIs



Spring Web Flow

https://projects.spring.io/spring-webflow/

Spring Web Flow builds on Spring MVC and allows implementing the "flows" of a web application. A flow encapsulates a sequence of steps that guide a user through the execution of some business task. It spans multiple HTTP requests, has state, deals with transactional data, is reusable, and may be dynamic and long-running in nature.



The sweet spot for Spring Web Flow is stateful web applications with controlled navigation such as checking in for a flight, applying for a loan, shopping cart checkout, or even adding a confirmation step to a form. What these scenarios have in common is one or more of the following traits:


There is a clear start and an end point.

The user must go through a set of screens in a specific order.

The changes are not finalized until the last step.

Once complete, it shouldn't be possible to repeat a transaction accidentally.



Spring Web Services

https://projects.spring.io/spring-ws/

Spring Web Services aims to facilitate contract-first SOAP service development, allowing for the creation of flexible web services using one of the many ways to manipulate XML payloads.


Spring Web Services (Spring-WS) is a product of the Spring community focused on creating document-driven Web services. Spring Web Services aims to facilitate contract-first SOAP service development, allowing for the creation of flexible web services using one of the many ways to manipulate XML payloads. The product is based on Spring itself, which means you can use the Spring concepts such as dependency injection as an integral part of your Web service.


People use Spring-WS for many reasons, but most are drawn to it after finding alternative SOAP stacks lacking when it comes to following Web service best practices. Spring-WS makes the best practice an easy practice. This includes practices such as the WS-I basic profile, Contract-First development, and having a loose coupling between contract and implementation. The other key features of Spring Web services are:


Features

Makes the Best Practice an Easy Practice: Spring Web Services makes enforcing best practices easier. This includes practices such as the WS-I basic profile, Contract-First development, and having a loose coupling between contract and implementation.

Powerful mappings: You can distribute incoming XML requests to any object, depending on message payload, SOAP Action header, or an XPath expression.

XML API support: Incoming XML messages can be handled in standard JAXP APIs such as DOM, SAX, and StAX, but also JDOM, dom4j, XOM, or even marshalling technologies.

Flexible XML Marshalling: The Object/XML Mapping module in the Spring Web Services distribution supports JAXB 1 and 2, Castor, XMLBeans, JiBX, and XStream. And because it is a separate module, you can use it in non-Web services code as well.

Reuses your Spring expertise: Spring-WS uses Spring application contexts for all configuration, which should help Spring developers get up-to-speed nice and quickly. Also, the architecture of Spring-WS resembles that of Spring-MVC.

Supports WS-Security: WS-Security allows you to sign SOAP messages, encrypt and decrypt them, or authenticate against them.

Integrates with Acegi Security: The WS-Security implementation of Spring Web Services provides integration with Spring Security. This means you can use your existing configuration for your SOAP service as well.

Built by Maven: This assists you in effectively reusing the Spring Web Services artifacts in your own Maven-based projects.

Apache license. You can confidently use Spring-WS in your project.



Spring Ldap

https://projects.spring.io/spring-ldap/

Makes it easier to build Spring-based applications that use the Lightweight Directory Access Protocol.


Spring LDAP is a Java library for simplifying LDAP operations, based on the pattern of Spring's JdbcTemplate. The framework relieves the user of common chores, such as looking up and closing contexts, looping through results, encoding/decoding values and filters, and more.


Features

Provides LDAP template which eliminates the need to worry about creating and closing LdapContext and looping through NamingEnumeration

Comprehensive unchecked Exception hierarchy built on Spring's DataAccessException

Contains classes for dynamically building LDAP filters and Distinguished Names (DNs)

Client-side LDAP transaction management

Much more…



Spring Session

https://projects.spring.io/spring-session/

Spring Session provides an API and implementations for managing a user’s session information.


Features

Spring Session makes it trivial to support clustered sessions without being tied to an application container specific solution. It also provides transparent integration with:


HttpSession - allows replacing the HttpSession in an application container (i.e. Tomcat) neutral way, with support for providing session IDs in headers to work with RESTful APIs

WebSocket - provides the ability to keep the HttpSession alive when receiving WebSocket messages

WebSession - allows replacing the Spring WebFlux’s WebSession in an application container neutral way

Modules

Spring Session consists of the following modules:


Spring Session Core - provides core Spring Session functionalities and APIs

Spring Session Data Redis - provides SessionRepository and ReactiveSessionRepository implementation backed by Redis and configuration support

Spring Session JDBC - provides SessionRepository implementation backed by a relational database and configuration support

Spring Session Hazelcast - provides SessionRepository implementation backed by Hazelcast and configuration support




Spring Shell

https://projects.spring.io/spring-shell/

The Spring Shell project provides an interactive shell that allows you to plugin your own custom commands using a Spring based programming model.


Users of the Spring Shell project can easily build a full-featured shell (aka command-line) application by depending on the Spring Shell jars and adding their own commands (which come as methods on Spring beans). Creating a command-line application can be useful e.g. to interact with your project's REST API, or to work with local file content.


Features

Spring Shell's features include


A simple, annotation driven, programming model to contribute custom commands

Use of Spring Boot auto-configuration functionality as the basis for a command plugin strategy

Tab completion, colorization, and script execution

Customization of command prompt, shell history file name, handling of results and errors

Dynamic enablement of commands based on domain specific criteria

Integration with the bean validation API

Already built-in commands, such as clear screen, gorgeous help, exit

ASCII art Tables, with formatting, alignment, fancy borders, etc.



Spring Flo

https://projects.spring.io/spring-flo/

Spring Flo is a JavaScript library that offers a basic embeddable HTML5 visual builder for pipelines and simple graphs. This library is used as the basis of the stream builder in Spring Cloud Data Flow.


Flo includes all the basic elements of an integration-flow designer such as connectors, control nodes, palettes, state transitions, and graph topologies—importantly, there is a textual shell, DSL support, and a graphical canvas designed for creating and reviewing comprehensive workflows.


Features

Create, manage, and monitor stream pipelines using DSL, a graphical canvas, or both

Write pipelines via DSL with content-assist and auto-complete

See a visual representation of modules across a distributed deployment

Use auto-adjustment and grid-layout capabilities in the GUI for easier and more efficient organization of pipelines



Spring Kafka

https://projects.spring.io/spring-kafka/

The Spring for Apache Kafka (spring-kafka) project applies core Spring concepts to the development of Kafka-based messaging solutions. It provides a "template" as a high-level abstraction for sending messages. It also provides support for Message-driven POJOs with @KafkaListener annotations and a "listener container". These libraries promote the use of dependency injection and declarative configuration. In all of these cases, you will see similarities to the JMS support in the Spring Framework and the RabbitMQ support in Spring AMQP.


Features

KafkaTemplate

KafkaMessageListenerContainer

@KafkaListener

KafkaTransactionManager

spring-kafka-test jar with embedded kafka server




Spring Statemachine

https://projects.spring.io/spring-statemachine/

Spring Statemachine is a framework for application developers to use state machine concepts with Spring applications.

Spring Statemachine aims to provide the following features:

Easy-to-use flat one-level state machine for simple use cases.

Hierarchical state machine structure to ease complex state configuration.

State machine regions to provide even more complex state configurations.

Usage of triggers, transitions, guards and actions.

Type-safe configuration adapter.

Builder pattern for easy instantiation outside of a Spring application context.

Recipes for common use cases.

Distributed state machine based on Zookeeper.

State machine event listeners.

UML Eclipse Papyrus modeling.

Storing machine config in persistent storage.

Spring IoC integration to associate beans with a state machine.


State machines are powerful because behaviour is always guaranteed to be consistent, making them relatively easy to debug: the operational rules are written in stone when the machine is started. The idea is that your application may exist in a finite number of states, and certain predefined triggers can take your application from one state to the next. Such triggers can be based on either events or timers.

It is much easier to define high-level logic outside of your application and then rely on the state machine to manage state. You can interact with the state machine by sending an event, listening for changes, or simply requesting the current state.
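A minimal plain-Java sketch of these ideas (a hypothetical example, not the Spring Statemachine API): a finite set of states, and predefined event triggers that move the machine from one state to the next while everything else is ignored.

```java
// Tiny finite state machine: transition rules are fixed up front, so
// behaviour is consistent and easy to reason about. Unknown (state, event)
// combinations are simply ignored.
public class OrderMachine {
    public enum State { NEW, PAID, SHIPPED }
    public enum Event { PAY, SHIP }

    private State state = State.NEW;

    public State getState() {
        return state;
    }

    public void fire(Event event) {
        if (state == State.NEW && event == Event.PAY) {
            state = State.PAID;
        } else if (state == State.PAID && event == Event.SHIP) {
            state = State.SHIPPED;
        }
        // any other trigger leaves the state unchanged
    }
}
```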

Do you want to ask a question? Go to StackOverflow and use the spring-statemachine tag, or chat on Gitter.


App) logback.xml sample

Posted by HULIA(휴리아)
2018. 4. 4. 15:39 Backend Development / Java Spring
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="1 seconds">

<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
         <encoder>
                <pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}][%-5level][%logger] %msg%n</pattern>
         </encoder>
</appender>

<appender name="SERVICE_LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
      <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <fileNamePattern>c:/test/service.log.%d{yyyy-MM-dd-HH}</fileNamePattern>
      </rollingPolicy>
      <encoder>
                 <pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}][%-5level][%logger] %msg%n</pattern>
      </encoder>
</appender>

<root level="INFO">
         <appender-ref ref="CONSOLE" />
         <appender-ref ref="SERVICE_LOG" />
</root>


<logger name="SERVICE_LOGGER" level="DEBUG" additivity="false">
       <appender-ref ref="CONSOLE" />
</logger>

</configuration>

Adding Swagger UI to Spring Boot: an API documentation web page

Posted by HULIA(휴리아)
2018. 4. 3. 22:55 Backend Development / Spring Boot

pom.xml

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.3.1</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.3.1</version>
</dependency>



Add the following to the Spring Boot application's main class:

@SpringBootApplication

@EnableSwagger2


With these annotations in place, the API information is generated automatically.




import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger.web.UiConfiguration;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public UiConfiguration uiConfig() {
        return UiConfiguration.DEFAULT;
    }

    private ApiInfo metadata() {
        return new ApiInfoBuilder()
                .title("Spring Boot")
                .description("Spring boot REST API")
                .version("1.0")
                .build();
    }

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.any())
                //.paths(PathSelectors.any())         // expose every controller
                .paths(PathSelectors.ant("/test/**")) // expose only the matching paths
                .build()
                .apiInfo(metadata());
    }
}

Creating a Config class like this completes the setup.



Additionally, if you want to document the API in more detail on the Swagger page, you can add annotations from:

https://github.com/swagger-api/swagger-core/wiki/Annotations



Example usage sites:

https://steemit.com/kr-dev/@igna84/spring-boot-web-swagger

https://jojoldu.tistory.com/31


App) LoggerUtil common Java code

Posted by HULIA(휴리아)
2018. 4. 2. 15:29 Backend Development / Java Spring
public static void debug(Logger logger, String msg, Object... params){
      if(logger.isDebugEnabled()){
            logger.debug(msg, params);
      }
}

public static void info(Logger logger, String msg, Object... params){
       if(logger.isInfoEnabled()){
             logger.info(msg, params);
       }
}

// Add warn and error methods the same way
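The snippet above assumes an SLF4J-style Logger. For reference, here is a sketch of the same guard pattern using only the JDK's built-in java.util.logging — an alternative for projects without SLF4J on the classpath, not part of the original post:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Same level-guard pattern with java.util.logging: check the level first,
// and skip the log call entirely when it is disabled. FINE ~= debug.
public class JulLoggerUtil {
    public static void debug(Logger logger, String msg, Object... params) {
        if (logger.isLoggable(Level.FINE)) {
            logger.log(Level.FINE, msg, params); // {0}, {1}, ... placeholders
        }
    }

    public static void info(Logger logger, String msg, Object... params) {
        if (logger.isLoggable(Level.INFO)) {
            logger.log(Level.INFO, msg, params);
        }
    }
}
```

Note that with SLF4J, logger.debug(msg, params) already checks the level internally, so the explicit guard there is mostly a micro-optimization; with java.util.logging the guard likewise avoids the log call when the level is disabled.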