How it works:
1. When reading a page of data from HBase, fetch one extra row. For example, with a page size of 10, the first query fetches 10+1 rows, then the rowkeys of the first row and of the extra last row are stored in Redis, keyed by the user's token plus the URL, i.e. token.set(token+url: List<String>).
2. When the user clicks "next page", look up the rowkey at list.get(currentPage) in the Redis list. If it exists, use it as the startRowKey and fetch 10+1 rows again, saving the last rowkey back into Redis. If it does not exist, the query fails and the user is prompted to search again; in theory this cannot happen unless Redis itself has gone down.
3. If both the startRowKey and the stopRowKey of the requested page can be found under the token, only that rowkey range needs to be scanned.
4. When is this Redis data cleared? First, by giving the Redis key an expiry time. Second, and this is essential for keeping the data in the steps above accurate: whenever the user performs any action other than clicking "previous page" or "next page", the current user's paging data in Redis is cleared.
5. Since there is pagination, there must also be a total row count; for this I use an HBase coprocessor. Because every count query costs time, the count obtained on the first query is cached in the user's Redis entry and cleared by the same rules as in step 4.
6. One more important precondition makes this whole scheme feasible: the front-end pagination UI offers only two buttons, "previous page" and "next page".
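The Redis bookkeeping described above can be sketched in plain Java. This is a minimal, hypothetical model: an in-memory `ArrayList` stands in for the Redis list keyed by token+URL, and the class and method names (`PageKeyBook`, `recordNextStart`, `startKeyFor`) are illustrative, not part of the project.

```java
import java.util.ArrayList;
import java.util.List;

// In-memory sketch of the paging state kept in Redis:
// index i holds the start rowkey of page i+1.
public class PageKeyBook {
    private final List<String> startKeys = new ArrayList<>();

    // Called after each page is fetched: record the (limit+1)-th row's key
    // as the start of the next page, but only once per page.
    public void recordNextStart(int currPage, String nextPageStartKey) {
        if (startKeys.isEmpty()) {
            startKeys.add("");           // page 1 scans from the range start
        }
        if (startKeys.size() == currPage) {
            startKeys.add(nextPageStartKey);
        } // if size > currPage the key is already known: do nothing
    }

    // Start rowkey for a requested page, or null if unknown
    // (the "Redis lost the state" error case in step 2).
    public String startKeyFor(int page) {
        return page - 1 < startKeys.size() ? startKeys.get(page - 1) : null;
    }

    // Any action other than prev/next clears the state (step 4).
    public void clear() {
        startKeys.clear();
    }
}
```

Because only prev/next navigation is allowed (step 6), the list can only ever grow by one entry at a time, so every reachable page always has its start key recorded.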
Now the code:
The controller method. The base scaffolding here is the renren rapid development framework.
```java
@RequestMapping("/list")
public R list(@RequestParam Map<String, Object> params, HttpServletRequest httpRequest) {
    long time = System.currentTimeMillis();
    HBasePage page = null;
    try {
        String token = httpRequest.getHeader("token");
        // If extra filters are needed, define your own and add them
        // to the query via the .buildFilter(filter) methods.
        HbaseQuery query = new HbaseQuery(params)
                .buildScanCount()
                .buildPageRowKey(token)
                .finish();
        page = service.query(query).buildRedisRowKey(token);
    } catch (Exception e) {
        e.printStackTrace();
    }
    long time2 = System.currentTimeMillis();
    System.out.println("time2-time==list=" + (time2 - time));
    return R.ok().put("page", page);
}
```
The class HbaseQuery processes the query parameters. For business reasons every HBase query here carries two mandatory conditions, a start date and an end date, so I encapsulate those two parameters directly in HbaseQuery.
```java
public class HbaseQuery {
    private int page;      // current page number
    private long limit;    // rows per page
    private Map<String, Object> params;
    private List<String> pageStartRowKeys;
    private Date startDate;
    private Date endDate;
    private Scan countScan;
    private Scan dataScan = new Scan();
    private int cache = 10;
    private DateFormat sf = new SimpleDateFormat("yyyy-MM-dd");
    private FilterList fl = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    private FilterList countfl = new FilterList(FilterList.Operator.MUST_PASS_ALL);

    public HbaseQuery(Map<String, Object> params) throws Exception {
        this.params = params;
        startDate = sf.parse((String) params.get("startDate"));
        endDate = sf.parse((String) params.get("endDate"));
        endDate = DateUtils.addDays(endDate, 1);
        this.page = Integer.parseInt(params.get("page").toString());
        this.limit = Integer.parseInt(params.get("limit").toString());
        // +1 because every query fetches limit+1 rows
        cache = limit > 5000 ? 5000 : ((int) limit + 1);
        params.remove("startDate");
        params.remove("endDate");
        params.remove("page");
        params.remove("limit");
    }

    public HbaseQuery buildScanCount() throws Exception {
        countScan = new Scan();
        countScan.setMaxVersions();
        countScan.setCaching(5);
        Long startLong = Long.MAX_VALUE - startDate.getTime();
        countScan.setStopRow((startLong + "-").getBytes());
        Long endLong = Long.MAX_VALUE - (endDate.getTime() - 1);
        countScan.setStartRow((endLong + "-").getBytes());
        return this;
    }

    public HbaseQuery buildPageRowKey(String token) throws Exception {
        dataScan.setMaxVersions();
        Long startLong = Long.MAX_VALUE - startDate.getTime();
        dataScan.setStopRow((startLong + "-").getBytes());
        Long endLong = Long.MAX_VALUE - (endDate.getTime() - 1);
        dataScan.setStartRow((endLong + "-").getBytes());
        RedisUtils redisUtils = (RedisUtils) SpringContextUtils.getBean("redisUtils");
        List<String> pageStartRowKeys = redisUtils.get(token, List.class);
        // the user clicked "previous page" or "next page" ...
        if (params.get("pageicon") != null && !((String) params.get("pageicon")).equals("")) {
            String pageicon = (String) params.get("pageicon");
            // ... and the start rowkeys are present in Redis
            if (pageStartRowKeys != null) {
                String startRowKey = pageStartRowKeys.get(this.page - 1);
                if (pageicon.equals("next") && pageStartRowKeys.size() == this.page) {
                    dataScan.setStartRow(startRowKey.getBytes());
                    fl.addFilter(new PageFilter(cache));
                } else if ((pageicon.equals("next") && pageStartRowKeys.size() > this.page)
                        || pageicon.equals("prev")) {
                    String stopRowKey = pageStartRowKeys.get(this.page);
                    dataScan.setStartRow(startRowKey.getBytes());
                    dataScan.setStopRow(stopRowKey.getBytes());
                    fl.addFilter(new PageFilter(this.getLimit()));
                }
            } else {
                throw new Exception("Paging button clicked but Redis holds no paging state; this should never happen");
            }
        } else { // a non-paging button was clicked: drop the paging state in Redis
            redisUtils.delete(token);
            fl.addFilter(new PageFilter(this.getLimit() + 1));
        }
        dataScan.setCaching(cache);
        return this;
    }

    public HbaseQuery buildDataFilter(Filter filter) {
        fl.addFilter(filter);
        return this;
    }

    public HbaseQuery buildCountFilter(Filter filter) {
        countfl.addFilter(filter); // count filters belong on the count scan, not on fl
        return this;
    }

    public HbaseQuery finish() {
        countScan.setFilter(countfl);
        dataScan.setFilter(fl);
        return this;
    }
}
```
The HBase query method. Note: before using an HBase coprocessor, first make sure the feature is enabled on the table.
Enabling the coprocessor on an HBase table (shell commands):
(1) Disable the table: hbase> disable 'mytable'
(2) Add the aggregation coprocessor: hbase> alter 'mytable', METHOD => 'table_att', 'coprocessor' => '|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'
(3) Re-enable the table: hbase> enable 'mytable'
```java
public HBasePage query(HbaseQuery query) {
    long time = System.currentTimeMillis();
    Map<String, String> communtiyKeysMap = new HashMap<>();
    HBasePage page = new HBasePage(query.getLimit(), query.getPage());
    final String tableName = this.getTableName();
    if (query.getCountScan() != null) {
        AggregationClient ac = new AggregationClient(hbaseTemplate.getConfiguration());
        try {
            long count = ac.rowCount(TableName.valueOf(tableName),
                    new LongColumnInterpreter(), query.getCountScan());
            page.setTotalCount(count);
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
    long time2 = System.currentTimeMillis();
    List rows = hbaseTemplate.find(tableName, query.getDataScan(), new RowMapper<Object>() {
        @Override
        public Object mapRow(Result result, int i) throws Exception {
            Class clazz = ReflectMap.get(tableName); // maps table names to entity beans
            if (i == 0) {
                communtiyKeysMap.put("curPageStart", new String(result.getRow()));
            }
            if (i == query.getLimit()) {
                communtiyKeysMap.put("nextPageStart", new String(result.getRow()));
            }
            HBaseResultBuilder hrb = new HBaseResultBuilder<Object>("sf", result, clazz);
            return hrb.buildAll().fetch();
        }
    });
    // drop the extra (limit+1)-th row that was fetched only to learn the next page's start key
    if (rows.size() > 0 && page.getPageSize() < rows.size()) {
        rows.remove(rows.size() - 1);
    }
    page.setList(rows);
    page.setNextPageRow(communtiyKeysMap.get("nextPageStart"));
    page.setCurPageRow(communtiyKeysMap.get("curPageStart"));
    long time3 = System.currentTimeMillis();
    System.out.println("time2-time==getCount=" + (time2 - time));
    System.out.println("time3-time2==getData=" + (time3 - time2));
    return page;
}
```
The pagination class HBasePage:
```java
public class HBasePage implements Serializable {
    private static final long serialVersionUID = 1L;
    protected long totalCount;   // total number of records
    protected long pageSize;     // records per page
    protected int totalPage;     // total number of pages
    protected int currPage;      // current page number
    protected List<Object> list; // page data
    private String nextPageRow;  // start rowkey of the next page
    private String curPageRow;   // start rowkey of the current page

    /**
     * @param pageSize records per page
     * @param currPage current page number
     */
    public HBasePage(long pageSize, int currPage) {
        this.pageSize = pageSize;
        this.currPage = currPage;
    }

    public void setTotalCount(long totalCount) {
        this.totalCount = totalCount;
        this.totalPage = (int) Math.ceil((double) totalCount / pageSize);
    }

    public HBasePage buildRedisRowKey(String token) {
        RedisUtils redisUtils = (RedisUtils) SpringContextUtils.getBean("redisUtils");
        List<String> pageRowKeys = redisUtils.get(token, List.class);
        if (this.getList().size() > 0) {
            if (pageRowKeys == null || pageRowKeys.size() <= 0) {
                // first query of this session: seed the list with the current
                // and next page start keys
                pageRowKeys = new ArrayList<>();
                pageRowKeys.add(this.getCurPageRow().substring(0, this.getCurPageRow().indexOf("-") + 1));
                pageRowKeys.add(this.getNextPageRow().substring(0, this.getNextPageRow().indexOf("-") + 1));
                redisUtils.set(token, pageRowKeys);
            } else if (pageRowKeys.size() == this.getCurrPage()) {
                pageRowKeys.add(this.getNextPageRow().substring(0, this.getNextPageRow().indexOf("-") + 1));
                redisUtils.set(token, pageRowKeys);
            } // if pageRowKeys.size() > currPage the key is already recorded: do nothing
        }
        return this;
    }
}
```
Notes:
1. My rowkey rule is (Long.MAX_VALUE - new Date().getTime() + "-" + id), so take special care when reading the startRowKey and stopRowKey handling.
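The effect of this rule can be checked with a short sketch (RowKeyDemo is a hypothetical class and the timestamps are illustrative): because the prefix is Long.MAX_VALUE minus the epoch milliseconds, a later instant produces a lexicographically smaller rowkey. That is why a scan returns newest rows first, and why the scans above set startRow from endDate and stopRow from startDate.

```java
// Hypothetical demo of the rowkey rule: Long.MAX_VALUE - millis + "-" + id.
public class RowKeyDemo {
    static String rowKey(long epochMillis, String id) {
        return (Long.MAX_VALUE - epochMillis) + "-" + id;
    }

    public static void main(String[] args) {
        String older = rowKey(1_600_000_000_000L, "a"); // earlier instant
        String newer = rowKey(1_700_000_000_000L, "b"); // later instant
        // The later instant yields the smaller prefix, so it sorts first
        // in HBase's byte-wise rowkey order.
        System.out.println(newer.compareTo(older) < 0); // prints "true"
    }
}
```

One caveat worth noting: the prefix is not zero-padded, so string order only matches numeric order while all prefixes have the same digit count; for realistic epoch timestamps the prefix is always 19 digits, so this holds.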
If you have a better approach, or spot a defect in the code, comments are welcome.
HBase + Spring Boot + Redis pagination