
API 30

线程池可以缓存一定数量的线程。重用线程池中的线程,可以避免创建和销毁线程的开销。

能有效控制最大线程数。

能对线程进行简单的管理,如定时执行或指定间隔循环执行。

Java 线程池的顶层接口是 Executor,其子接口 ExecutorService 定义了更完整的线程池操作,常用实现类是 ThreadPoolExecutor。

ThreadPoolExecutor 源码注释

An ExecutorService that executes each submitted task using one of possibly several pooled threads, normally configured using Executors factory methods.

Thread pools address two different problems: they usually provide improved performance when executing large numbers of asynchronous tasks, due to reduced per-task invocation overhead, and they provide a means of bounding and managing the resources, including threads, consumed when executing a collection of tasks. Each ThreadPoolExecutor also maintains some basic statistics, such as the number of completed tasks.

To be useful across a wide range of contexts, this class provides many adjustable parameters and extensibility hooks. However, programmers are urged to use the more convenient Executors factory methods Executors.newCachedThreadPool() (unbounded thread pool, with automatic thread reclamation), Executors.newFixedThreadPool(int) (fixed size thread pool) and Executors.newSingleThreadExecutor() (single background thread), that preconfigure settings for the most common usage scenarios. Otherwise, use the following guide when manually configuring and tuning this class:

  • Core and maximum pool sizes

    A ThreadPoolExecutor will automatically adjust the pool size (see getPoolSize()) according to the bounds set by corePoolSize (see getCorePoolSize()) and maximumPoolSize (see getMaximumPoolSize()). When a new task is submitted in method execute(Runnable), and fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle. If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full. By setting corePoolSize and maximumPoolSize the same, you create a fixed-size thread pool. By setting maximumPoolSize to an essentially unbounded value such as Integer.MAX_VALUE, you allow the pool to accommodate an arbitrary number of concurrent tasks. Most typically, core and maximum pool sizes are set only upon construction, but they may also be changed dynamically using setCorePoolSize(int) and setMaximumPoolSize(int).

  • On-demand construction

    By default, even core threads are initially created and started only when new tasks arrive, but this can be overridden dynamically using method prestartCoreThread() or prestartAllCoreThreads(). You probably want to prestart threads if you construct the pool with a non-empty queue.

  • Creating new threads

    New threads are created using a ThreadFactory. If not otherwise specified, a Executors.defaultThreadFactory() is used, that creates threads to all be in the same ThreadGroup and with the same NORM_PRIORITY priority and non-daemon status. By supplying a different ThreadFactory, you can alter the thread’s name, thread group, priority, daemon status, etc. If a ThreadFactory fails to create a thread when asked by returning null from newThread, the executor will continue, but might not be able to execute any tasks. Threads should possess the “modifyThread” RuntimePermission. If worker threads or other threads using the pool do not possess this permission, service may be degraded: configuration changes may not take effect in a timely manner, and a shutdown pool may remain in a state in which termination is possible but not completed.

  • Keep-alive times

    If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime (see getKeepAliveTime(TimeUnit)). This provides a means of reducing resource consumption when the pool is not being actively used. If the pool becomes more active later, new threads will be constructed. This parameter can also be changed dynamically using method setKeepAliveTime(long, TimeUnit). Using a value of Long.MAX_VALUE TimeUnit.NANOSECONDS effectively disables idle threads from ever terminating prior to shut down. By default, the keep-alive policy applies only when there are more than corePoolSize threads. But method allowCoreThreadTimeOut(boolean) can be used to apply this time-out policy to core threads as well, so long as the keepAliveTime value is non-zero.

  • Queuing

    Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing: If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread. If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected. There are three general strategies for queuing: Direct handoffs. A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed. Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn’t have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each others execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed. Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.

  • Rejected tasks

    New tasks submitted in method execute(Runnable) will be rejected when the Executor has been shut down, and also when the Executor uses finite bounds for both maximum threads and work queue capacity, and is saturated. In either case, the execute method invokes the RejectedExecutionHandler.rejectedExecution(Runnable, ThreadPoolExecutor) method of its RejectedExecutionHandler. Four predefined handler policies are provided: In the default ThreadPoolExecutor.AbortPolicy, the handler throws a runtime RejectedExecutionException upon rejection. In ThreadPoolExecutor.CallerRunsPolicy, the thread that invokes execute itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted. In ThreadPoolExecutor.DiscardPolicy, a task that cannot be executed is simply dropped. In ThreadPoolExecutor.DiscardOldestPolicy, if the executor is not shut down, the task at the head of the work queue is dropped, and then execution is retried (which can fail again, causing this to be repeated.) It is possible to define and use other kinds of RejectedExecutionHandler classes. Doing so requires some care especially when policies are designed to work only under particular capacity or queuing policies.

  • Hook methods

    This class provides protected overridable beforeExecute(Thread, Runnable) and afterExecute(Runnable, Throwable) methods that are called before and after execution of each task. These can be used to manipulate the execution environment; for example, reinitializing ThreadLocals, gathering statistics, or adding log entries. Additionally, method terminated() can be overridden to perform any special processing that needs to be done once the Executor has fully terminated. If hook or callback methods throw exceptions, internal worker threads may in turn fail and abruptly terminate.

  • Queue maintenance

    Method getQueue() allows access to the work queue for purposes of monitoring and debugging. Use of this method for any other purpose is strongly discouraged. Two supplied methods, remove(Runnable) and purge() are available to assist in storage reclamation when large numbers of queued tasks become cancelled.

  • Finalization

    A pool that is no longer referenced in a program AND has no remaining threads will be shutdown automatically. If you would like to ensure that unreferenced pools are reclaimed even if users forget to call shutdown(), then you must arrange that unused threads eventually die, by setting appropriate keep-alive times, using a lower bound of zero core threads and/or setting allowCoreThreadTimeOut(boolean).

Extension example. Most extensions of this class override one or more of the protected hook methods. For example, here is a subclass that adds a simple pause/resume feature:

class PausableThreadPoolExecutor extends ThreadPoolExecutor {
private boolean isPaused;
private ReentrantLock pauseLock = new ReentrantLock();
private Condition unpaused = pauseLock.newCondition();

public PausableThreadPoolExecutor(...) { super(...); }

protected void beforeExecute(Thread t, Runnable r) {
super.beforeExecute(t, r);
pauseLock.lock();
try {
while (isPaused) unpaused.await();
} catch (InterruptedException ie) {
t.interrupt();
} finally {
pauseLock.unlock();
}
}

public void pause() {
pauseLock.lock();
try {
isPaused = true;
} finally {
pauseLock.unlock();
}
}

public void resume() {
pauseLock.lock();
try {
isPaused = false;
unpaused.signalAll();
} finally {
pauseLock.unlock();
}
}
}

构造方法

/**
* Creates a new {@code ThreadPoolExecutor} with the given initial
* parameters.
*
* @param corePoolSize the number of threads to keep in the pool, even
* if they are idle, unless {@code allowCoreThreadTimeOut} is set
* @param maximumPoolSize the maximum number of threads to allow in the
* pool
* @param keepAliveTime when the number of threads is greater than
* the core, this is the maximum time that excess idle threads
* will wait for new tasks before terminating.
* @param unit the time unit for the {@code keepAliveTime} argument
* @param workQueue the queue to use for holding tasks before they are
* executed. This queue will hold only the {@code Runnable}
* tasks submitted by the {@code execute} method.
* @param threadFactory the factory to use when the executor
* creates a new thread
* @param handler the handler to use when execution is blocked
* because the thread bounds and queue capacities are reached
* @throws IllegalArgumentException if one of the following holds:<br>
* {@code corePoolSize < 0}<br>
* {@code keepAliveTime < 0}<br>
* {@code maximumPoolSize <= 0}<br>
* {@code maximumPoolSize < corePoolSize}
* @throws NullPointerException if {@code workQueue}
* or {@code threadFactory} or {@code handler} is null
*/
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory,
RejectedExecutionHandler handler) {

corePoolSize:核心线程数。

maximumPoolSize:线程池允许创建的最大线程数。

keepAliveTime:非核心线程闲置的超时时间。

unit:时间单位。

workQueue:任务队列。

threadFactory:线程工厂。

handler:饱和策略。

线程池的处理流程

当提交任务时,如果当前线程数小于 corePoolSize,将创建新线程执行任务;否则如果 workQueue 未满,将把任务入队;否则如果线程数小于 maximumPoolSize,创建非核心线程执行任务;否则执行饱和策略。
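下面用一个小例子演示上述流程(参数均为假设值):核心线程 2、最大线程 4、有界队列容量 2、默认的 AbortPolicy。连续提交 7 个任务,可以依次观察到任务占用核心线程、进入队列、触发创建非核心线程,最后触发饱和策略。

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolFlowDemo {
    public static void main(String[] args) {
        // 假设的参数:2 个核心线程,最多 4 个线程,队列容量 2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.AbortPolicy());

        // 任务 1、2 占用核心线程;3、4 入队;5、6 创建非核心线程;7 被拒绝
        for (int i = 1; i <= 7; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException ignored) {
                    }
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
    }
}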

ThreadPoolExecutor#execute

public void execute(Runnable command) {
if (command == null)
throw new NullPointerException();
/*
* Proceed in 3 steps:
*
* 1. If fewer than corePoolSize threads are running, try to
* start a new thread with the given command as its first
* task. The call to addWorker atomically checks runState and
* workerCount, and so prevents false alarms that would add
* threads when it shouldn't, by returning false.
*
* 2. If a task can be successfully queued, then we still need
* to double-check whether we should have added a thread
* (because existing ones died since last checking) or that
* the pool shut down since entry into this method. So we
* recheck state and if necessary roll back the enqueuing if
* stopped, or start a new thread if there are none.
*
* 3. If we cannot queue task, then we try to add a new
* thread. If it fails, we know we are shut down or saturated
* and so reject the task.
*/
int c = ctl.get();
if (workerCountOf(c) < corePoolSize) {
if (addWorker(command, true))
return;
c = ctl.get();
}
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
else if (!addWorker(command, false))
reject(command);
}

线程池中线程的创建

销毁

线程池的种类

通过设置不同的 ThreadPoolExecutor 参数可以构建不同种类的线程池,Executors 类中提供了创建各种线程池的方法。比较常用的 4 种:

FixedThreadPool

可重用固定线程数的线程池。

核心线程数是 nThreads,最大线程数是 nThreads。只有核心线程,不会创建非核心线程。任务队列无界。

任务队列使用 LinkedBlockingQueue,任务按 FIFO 顺序执行。

某个线程发生了未预期的 Exception 而结束,将补充一个新的线程。

能快速响应外界的请求。

public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}

CachedThreadPool

根据需要创建线程。

核心线程数为 0。非核心线程数不限。超时 60 秒。SynchronousQueue 是不存储元素的阻塞队列,每一个插入操作必须等待一个移除操作,一个移除操作必须等待一个插入操作。

适合大量需要立即处理且耗时较少的线程。

public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
}

SingleThreadExecutor

核心线程和最大线程数都是 1。

确保所有任务按照顺序执行。

public static ExecutorService newSingleThreadExecutor() {
return new FinalizableDelegatedExecutorService
(new ThreadPoolExecutor(1, 1,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>()));
}

ScheduledThreadPool

能实现定时和周期性处理任务的线程池。

public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
return new ScheduledThreadPoolExecutor(corePoolSize);
}
public ScheduledThreadPoolExecutor(int corePoolSize) {
super(corePoolSize, Integer.MAX_VALUE,
DEFAULT_KEEPALIVE_MILLIS, MILLISECONDS,
new DelayedWorkQueue());
}

AsyncTask

源码注释

AsyncTask was intended to enable proper and easy use of the UI thread. However, the most common use case was for integrating into UI, and that would cause Context leaks, missed callbacks, or crashes on configuration changes. It also has inconsistent behavior on different versions of the platform, swallows exceptions from doInBackground, and does not provide much utility over using Executors directly.
AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most.) If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent package such as Executor, ThreadPoolExecutor and FutureTask.
An asynchronous task is defined by a computation that runs on a background thread and whose result is published on the UI thread. An asynchronous task is defined by 3 generic types, called Params, Progress and Result, and 4 steps, called onPreExecute, doInBackground, onProgressUpdate and onPostExecute.
Developer Guides
For more information about using tasks and threads, read the Processes and Threads developer guide.
Usage
AsyncTask must be subclassed to be used. The subclass will override at least one method (doInBackground), and most often will override a second one (onPostExecute.)
Here is an example of subclassing:

private class DownloadFilesTask extends AsyncTask<URL, Integer, Long> {
protected Long doInBackground(URL... urls) {
int count = urls.length;
long totalSize = 0;
for (int i = 0; i < count; i++) {
totalSize += Downloader.downloadFile(urls[i]);
publishProgress((int) ((i / (float) count) * 100));
// Escape early if cancel() is called
if (isCancelled()) break;
}
return totalSize;
}

protected void onProgressUpdate(Integer... progress) {
setProgressPercent(progress[0]);
}

protected void onPostExecute(Long result) {
showDialog("Downloaded " + result + " bytes");
}
}

Once created, a task is executed very simply:

new DownloadFilesTask().execute(url1, url2, url3);

AsyncTask’s generic types
The three types used by an asynchronous task are the following:
Params, the type of the parameters sent to the task upon execution.
Progress, the type of the progress units published during the background computation.
Result, the type of the result of the background computation.
Not all types are always used by an asynchronous task. To mark a type as unused, simply use the type Void:

private class MyTask extends AsyncTask<Void, Void, Void> { ... }

The 4 steps
When an asynchronous task is executed, the task goes through 4 steps:
onPreExecute(), invoked on the UI thread before the task is executed. This step is normally used to setup the task, for instance by showing a progress bar in the user interface.
doInBackground, invoked on the background thread immediately after onPreExecute() finishes executing. This step is used to perform background computation that can take a long time. The parameters of the asynchronous task are passed to this step. The result of the computation must be returned by this step and will be passed back to the last step. This step can also use publishProgress to publish one or more units of progress. These values are published on the UI thread, in the onProgressUpdate step.
onProgressUpdate, invoked on the UI thread after a call to publishProgress. The timing of the execution is undefined. This method is used to display any form of progress in the user interface while the background computation is still executing. For instance, it can be used to animate a progress bar or show logs in a text field.
onPostExecute, invoked on the UI thread after the background computation finishes. The result of the background computation is passed to this step as a parameter.
Cancelling a task
A task can be cancelled at any time by invoking cancel(boolean). Invoking this method will cause subsequent calls to isCancelled() to return true. After invoking this method, onCancelled(Object), instead of onPostExecute(Object) will be invoked after doInBackground(Object[]) returns. To ensure that a task is cancelled as quickly as possible, you should always check the return value of isCancelled() periodically from doInBackground(Object[]), if possible (inside a loop for instance.)
Threading rules
There are a few threading rules that must be followed for this class to work properly:
The AsyncTask class must be loaded on the UI thread. This is done automatically as of Build.VERSION_CODES.JELLY_BEAN.
The task instance must be created on the UI thread.
execute must be invoked on the UI thread.
Do not call onPreExecute(), onPostExecute, doInBackground, onProgressUpdate manually.
The task can be executed only once (an exception will be thrown if a second execution is attempted.)
Memory observability
AsyncTask guarantees that all callback calls are synchronized to ensure the following without explicit synchronizations.
The memory effects of onPreExecute, and anything else executed before the call to execute, including the construction of the AsyncTask object, are visible to doInBackground.
The memory effects of doInBackground are visible to onPostExecute.
Any memory effects of doInBackground preceding a call to publishProgress are visible to the corresponding onProgressUpdate call. (But doInBackground continues to run, and care needs to be taken that later updates in doInBackground do not interfere with an in-progress onProgressUpdate call.)
Any memory effects preceding a call to cancel are visible after a call to isCancelled that returns true as a result, or during and after a resulting call to onCancelled.
Order of execution
When first introduced, AsyncTasks were executed serially on a single background thread. Starting with Build.VERSION_CODES.DONUT, this was changed to a pool of threads allowing multiple tasks to operate in parallel. Starting with Build.VERSION_CODES.HONEYCOMB, tasks are executed on a single thread to avoid common application errors caused by parallel execution.
If you truly want parallel execution, you can invoke executeOnExecutor(Executor, Object[]) with THREAD_POOL_EXECUTOR.
Deprecated
Use the standard java.util.concurrent or Kotlin concurrency utilities instead.

源码分析

AsyncTask#execute:

@MainThread
public final AsyncTask<Params, Progress, Result> execute(Params... params) {
return executeOnExecutor(sDefaultExecutor, params);
}

executeOnExecutor

sDefaultExecutor 是 AsyncTask 内部的 SerialExecutor 类的一个实例,是一个串行的线程池。

private static class SerialExecutor implements Executor {
final ArrayDeque<Runnable> mTasks = new ArrayDeque<Runnable>();
Runnable mActive;

public synchronized void execute(final Runnable r) {
mTasks.offer(new Runnable() {
public void run() {
try {
r.run();
} finally {
scheduleNext();
}
}
});
if (mActive == null) {
scheduleNext();
}
}

protected synchronized void scheduleNext() {
if ((mActive = mTasks.poll()) != null) {
THREAD_POOL_EXECUTOR.execute(mActive);
}
}
}

如果队列中还有 task,就继续 execute。SerialExecutor 用于 task 的排队,THREAD_POOL_EXECUTOR 用于 task 的真正执行。

@Deprecated
public static final Executor THREAD_POOL_EXECUTOR;

static {
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
CORE_POOL_SIZE, MAXIMUM_POOL_SIZE, KEEP_ALIVE_SECONDS, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>(), sThreadFactory);
threadPoolExecutor.setRejectedExecutionHandler(sRunOnSerialPolicy);
THREAD_POOL_EXECUTOR = threadPoolExecutor;
}

核心线程数 1,最大线程 20,超时时间 3s,阻塞队列是 SynchronousQueue。

还有一个内部类 InternalHandler,用于把执行环境从线程池切换到主线程。

先调用 onPreExecute,再调用 exec.execute,将一个 FutureTask 传进去。

AsyncTask 的构造方法中有个 mWorker,mWorker 的 call 最终将在线程池中执行:

mWorker = new WorkerRunnable<Params, Result>() {
public Result call() throws Exception {
mTaskInvoked.set(true);
Result result = null;
try {
Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
//noinspection unchecked
result = doInBackground(mParams);
Binder.flushPendingCommands();
} catch (Throwable tr) {
mCancelled.set(true);
throw tr;
} finally {
postResult(result);
}
return result;
}
};

这里调用了 doInBackground 方法,得到 result,再传递给 postResult 方法:

private Result postResult(Result result) {
@SuppressWarnings("unchecked")
Message message = getHandler().obtainMessage(MESSAGE_POST_RESULT,
new AsyncTaskResult<Result>(this, result));
message.sendToTarget();
return result;
}

通过 mHandler,获得了一个 MESSAGE_POST_RESULT 的 msg:

mHandler = callbackLooper == null || callbackLooper == Looper.getMainLooper()
? getMainHandler()
: new Handler(callbackLooper);

getMainHandler:

private static Handler getMainHandler() {
synchronized (AsyncTask.class) {
if (sHandler == null) {
sHandler = new InternalHandler(Looper.getMainLooper());
}
return sHandler;
}
}
private static class InternalHandler extends Handler {
public InternalHandler(Looper looper) {
super(looper);
}

@SuppressWarnings({"unchecked", "RawUseOfParameterizedType"})
@Override
public void handleMessage(Message msg) {
AsyncTaskResult<?> result = (AsyncTaskResult<?>) msg.obj;
switch (msg.what) {
case MESSAGE_POST_RESULT:
// There is only one result
result.mTask.finish(result.mData[0]);
break;
case MESSAGE_POST_PROGRESS:
result.mTask.onProgressUpdate(result.mData);
break;
}
}
}

sHandler 是一个静态对象,为了能将执行环境切换到主线程,就要在主线程创建。静态变量在类加载的时候进行初始化,因此要求 AsyncTask 要在主线程加载。

finish

private void finish(Result result) {
if (isCancelled()) {
onCancelled(result);
} else {
onPostExecute(result);
}
mStatus = Status.FINISHED;
}

没取消就调用 onPostExecute。

Message#sendToTarget

public void sendToTarget() {
target.sendMessage(this);
}

HandlerThread

继承自 Thread,内部在 run 方法中创建了 Looper 并开始 loop,这样使用时就可以基于 HandlerThread 创建 Handler 了。通过 HandlerThread#getLooper 获得该线程的 Looper;通过 HandlerThread#getThreadHandler 可以获得该线程共享的 Handler。

普通的 Thread 是在 run 方法中执行一个耗时任务;HandlerThread 内部创建了消息队列,外界需要通过 Handler 发送消息的方式来通知 HandlerThread 执行一个具体的任务。run 方法是一个无限循环,需要调用 quit 或 quitSafely 来终止线程的执行。
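一个最小的使用示意(线程名与任务内容为假设):

// 在子线程的 Looper 上顺序处理任务
HandlerThread workerThread = new HandlerThread("worker");
workerThread.start(); // run() 中会创建 Looper 并开始 loop

// 用该线程的 Looper 创建 Handler,post 的任务都在 workerThread 中执行
Handler workerHandler = new Handler(workerThread.getLooper());
workerHandler.post(() -> {
    // 耗时任务,例如读写文件
});

// 不再需要时退出消息循环
workerThread.quitSafely();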

IntentService

继承自 Service,拥有 Service 的特性,同时内部使用了 HandlerThread。当 onStart() 被触发时,向 HandlerThread 的 MessageQueue 中添加一条 Message,自建的 Handler 处理完毕之后立即调用 stopSelf() 停止 Service 的运行。可以利用它在后台执行任务,执行完成后不需要管理,会自动停止。当然,记得和一般 Service 一样在清单文件中注册。
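一个最小的 IntentService 子类示意(类名与任务内容为假设):

import android.app.IntentService;
import android.content.Intent;

public class UploadService extends IntentService {

    public UploadService() {
        super("UploadService"); // 内部 HandlerThread 的线程名
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // 运行在 IntentService 内部的 HandlerThread 上,
        // 队列中的任务全部处理完后会自动 stopSelf()
    }
}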

一些问题

线程超时后会怎样?

超时后 workQueue.poll 将返回 null 的 Runnable;getTask 在下一轮循环中判定该线程可以被回收,于是返回 null,runWorker 的循环随之结束,最终由 processWorkerExit 完成线程的退出和回收。

ThreadPoolExecutor#getTask

private Runnable getTask() {
boolean timedOut = false; // Did the last poll() time out?

for (;;) {
int c = ctl.get();
int rs = runStateOf(c);

// Check if queue empty only if necessary.
if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
decrementWorkerCount();
return null;
}

int wc = workerCountOf(c);

// Are workers subject to culling?
boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;

if ((wc > maximumPoolSize || (timed && timedOut))
&& (wc > 1 || workQueue.isEmpty())) {
if (compareAndDecrementWorkerCount(c))
return null;
continue;
}

try {
Runnable r = timed ?
workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
workQueue.take();
if (r != null)
return r;
timedOut = true;
} catch (InterruptedException retry) {
timedOut = false;
}
}
}

LinkedBlockingQueue#poll

public E poll(long timeout, TimeUnit unit) throws InterruptedException {
E x = null;
int c = -1;
long nanos = unit.toNanos(timeout);
final AtomicInteger count = this.count;
final ReentrantLock takeLock = this.takeLock;
takeLock.lockInterruptibly();
try {
while (count.get() == 0) {
if (nanos <= 0L)
return null;
nanos = notEmpty.awaitNanos(nanos);
}
x = dequeue();
c = count.getAndDecrement();
if (c > 1)
notEmpty.signal();
} finally {
takeLock.unlock();
}
if (c == capacity)
signalNotFull();
return x;
}

ThreadPoolExecutor#runWorker

final void runWorker(Worker w) {
Thread wt = Thread.currentThread();
Runnable task = w.firstTask;
w.firstTask = null;
w.unlock(); // allow interrupts
boolean completedAbruptly = true;
try {
while (task != null || (task = getTask()) != null) {
w.lock();
// If pool is stopping, ensure thread is interrupted;
// if not, ensure thread is not interrupted. This
// requires a recheck in second case to deal with
// shutdownNow race while clearing interrupt
if ((runStateAtLeast(ctl.get(), STOP) ||
(Thread.interrupted() &&
runStateAtLeast(ctl.get(), STOP))) &&
!wt.isInterrupted())
wt.interrupt();
try {
beforeExecute(wt, task);
Throwable thrown = null;
try {
task.run();
} catch (RuntimeException x) {
thrown = x; throw x;
} catch (Error x) {
thrown = x; throw x;
} catch (Throwable x) {
thrown = x; throw new Error(x);
} finally {
afterExecute(task, thrown);
}
} finally {
task = null;
w.completedTasks++;
w.unlock();
}
}
completedAbruptly = false;
} finally {
processWorkerExit(w, completedAbruptly);
}
}

线程池中线程的销毁依赖 JVM 自动的回收,线程池做的工作是根据当前线程池的状态维护一定数量的线程引用,防止这部分线程被 JVM 回收,当线程池决定哪些线程需要回收时,只需要将其引用消除即可。Worker 被创建出来后,就会不断地进行轮询,然后获取任务去执行,核心线程可以无限等待获取任务,非核心线程要限时获取任务。当 Worker 无法获取到任务,也就是获取的任务为空时,循环会结束,Worker 会主动消除自身在线程池内的引用。

线程回收的工作是在 processWorkerExit 方法中完成的。

参考

源码

Java线程池实现原理及其在美团业务中的实践 - 美团技术团队

关于 Handler 的一切

JDK1.8

概述

String 对象是不可变的。内部用一个 final 的 char 数组 value(private final char value[];)来存储字符串中的每个字符。String 的 concat、substring 方法都是 new 了一个新的 String 对象。

StringBuffer since JDK1.0,用来实现可变的字符串。StringBuffer 是线程安全的,大多数方法都被 synchronized 关键字修饰过。StringBuffer 中的比较主要的两个方法是 append 和 insert。

StringBuilder since JDK1.5。用来实现可变的字符串,但方法没有用 synchronized 关键字修饰,因此线程不安全,但效率更高。

String

常量池

public static void main(String[] args) {
String str = "ab";
String str1 = "a" + "b";
String str2 = new String("ab");
String str3 = "ab".intern();
String str4 = "a".intern() + "b".intern();
String str5 = new String("ab").intern();
System.out.println(str == str1); // true
System.out.println(str == str2); // false
System.out.println(str == str3); // true
System.out.println(str == str4); // false
System.out.println(str == str5); // true
}

运行时常量池是方法区的一部分,Class 文件中常量池表中的字符串字面量将在类加载后存放到运行时常量池。使用 new 的将会在 Java 堆中分配。

intern() 方法注释:

返回字符串对象的规范表示。

一个字符串池,最初是空的,由 String 类私有地维护。

当调用 intern 方法时,如果池中已经包含了一个由 equals(Object) 方法确定的与这个 String 对象相等的字符串,那么将返回池中的字符串。否则,这个 String 对象将被添加到池中,并返回对这个 String 对象的引用。

由此可见,对于任何两个字符串 s 和 t,当且仅当 s.equals(t) 为真时,s.intern() == t.intern() 为真。

所有字面字符串和字符串值的常量表达式都会被 intern。字符串字面量的定义在 The Java™ 语言规范的 3.10.5 节中。

String#concat

public String concat(String str) {
int otherLen = str.length();
if (otherLen == 0) {
return this;
}
int len = value.length;
char buf[] = Arrays.copyOf(value, len + otherLen);
str.getChars(buf, len);
return new String(buf, true);
}

Arrays.copyOf(char[], int) 用来扩大一个数组的长度,返回一个新的字符数组:

public static char[] copyOf(char[] original, int newLength) {
char[] copy = new char[newLength];
System.arraycopy(original, 0, copy, 0,
Math.min(original.length, newLength));
return copy;
}

new 了一个新长度的 char[],再调用 System.arraycopy():

public static native void arraycopy(Object src,  int srcPos,
Object dest, int destPos,
int length);

是一个 native 方法,将 src 从 srcPos 位置 copy 到 dest 的 destPos 开始,copy length 长度。

上面把原字符串的 value 完整 copy 进了一个长度为 len + otherLen 的新数组,得到了新的 buf[]。接着调用 str.getChars(buf, len):

void getChars(char dst[], int dstBegin) {
System.arraycopy(value, 0, dst, dstBegin, value.length);
}

把要拼接的字符串 str copy 到 buf[] 中原字符串末尾(下标 len)之后的位置,copy 的长度就是 str 的长度。

最后以 buf[] 为参数构造了一个新的字符串返回。
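补充一个简单的使用示例,说明 concat 不会修改原字符串,而是返回新对象:

String s = "ab";
String t = s.concat("cd");
System.out.println(s);      // ab,原字符串不变
System.out.println(t);      // abcd
System.out.println(s == t); // false,concat 返回了新的 String 对象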

StringBuilder

field

从 AbstractStringBuilder 中继承:

char[] value; 存储 SB 中的字符。

int count; value 数组中已经使用的数量。

private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

构造方法

SB 初始容量是 16。

public StringBuilder() {
super(16);
}

StringBuilder#append

@Override
public StringBuilder append(String str) {
super.append(str);
return this;
}
public AbstractStringBuilder append(String str) {
if (str == null)
return appendNull();
int len = str.length();
ensureCapacityInternal(count + len);
str.getChars(0, len, value, count);
count += len;
return this;
}

如果传入的是 null,转换为字符串插进去:

private AbstractStringBuilder appendNull() {
int c = count;
ensureCapacityInternal(c + 4);
final char[] value = this.value;
value[c++] = 'n';
value[c++] = 'u';
value[c++] = 'l';
value[c++] = 'l';
count = c;
return this;
}

构造方法传 null 会报 NPE。
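二者的区别可以用下面的小例子验证:

StringBuilder sb = new StringBuilder();
sb.append((String) null);       // appendNull 把 'n' 'u' 'l' 'l' 四个字符接上去
System.out.println(sb);         // 输出 null

StringBuilder sb2 = new StringBuilder((String) null); // 构造方法直接抛 NullPointerException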

ensureCapacityInternal 也是用 Arrays.copyOf 增加长度:

private void ensureCapacityInternal(int minimumCapacity) {
// overflow-conscious code
if (minimumCapacity - value.length > 0) {
value = Arrays.copyOf(value,
newCapacity(minimumCapacity));
}
}

新的长度一般是原长度的 2 倍 + 2:

private int newCapacity(int minCapacity) {
// overflow-conscious code
int newCapacity = (value.length << 1) + 2;
if (newCapacity - minCapacity < 0) {
newCapacity = minCapacity;
}
return (newCapacity <= 0 || MAX_ARRAY_SIZE - newCapacity < 0)
? hugeCapacity(minCapacity)
: newCapacity;
}

接着调用了 String#getChars 的另一个重载方法:

public void getChars(int srcBegin, int srcEnd, char dst[], int dstBegin) {
if (srcBegin < 0) {
throw new StringIndexOutOfBoundsException(srcBegin);
}
if (srcEnd > value.length) {
throw new StringIndexOutOfBoundsException(srcEnd);
}
if (srcBegin > srcEnd) {
throw new StringIndexOutOfBoundsException(srcEnd - srcBegin);
}
System.arraycopy(value, srcBegin, dst, dstBegin, srcEnd - srcBegin);
}

str.getChars(0, len, value, count); 把要添加的 str 的 value 数组 copy 到 SB 的 value 中下标 count 之后的位置,copy 的长度是 len - 0,也就是把 str 接了上去。

下面更新 SB 中 value 数组已经使用的 count,返回更新后的 SB。

StringBuilder#toString

@Override
public String toString() {
// Create a copy, don't share the array
return new String(value, 0, count);
}

根据 SB 中的 value 中的 0 到 count 的数据构造一个 String 返回。

StringBuffer

大部分方法具体实现和 StringBuilder 差不多。

内部有一个 private transient char[] toStringCache;,用来缓存上一次 toString 返回的值,在 SB 修改时都会置 null。

@Override
public synchronized String toString() {
if (toStringCache == null) {
toStringCache = Arrays.copyOfRange(value, 0, count);
}
return new String(toStringCache, true);
}
public static char[] copyOfRange(char[] original, int from, int to) {
int newLength = to - from;
if (newLength < 0)
throw new IllegalArgumentException(from + " > " + to);
char[] copy = new char[newLength];
System.arraycopy(original, from, copy, 0,
Math.min(original.length - from, newLength));
return copy;
}

参考

源码

Java String 对象,你真的了解了吗?

implementation "com.squareup.okhttp3:okhttp:4.9.0"

基本用法

// 创建一个 OkHttpClient 
OkHttpClient okHttpClient = new OkHttpClient.Builder()
.build();
// 创建一个请求体 Request
Request request = new Request.Builder()
.url("http://wwww.baidu.com")
.get()
.build();
// 网络请求 Call
Call call = okHttpClient.newCall(request);
call.enqueue(new Callback() {
@Override
public void onFailure(@NotNull Call call, @NotNull IOException e) {
// 请求失败回调
}

@Override
public void onResponse(@NotNull Call call, @NotNull Response response) throws IOException {
// 请求成功回调
}
});

// call.execute()

核心处理流程图

来自:开源库—OkHttp 源码解析


OkHttpClient 的创建

OkHttpClient 实例通过建造者模式创建。

internal var dispatcher: Dispatcher = Dispatcher()
internal var connectionPool: ConnectionPool = ConnectionPool()
internal val interceptors: MutableList<Interceptor> = mutableListOf()
internal val networkInterceptors: MutableList<Interceptor> = mutableListOf()
internal var eventListenerFactory: EventListener.Factory = EventListener.NONE.asFactory()
internal var retryOnConnectionFailure = true
internal var authenticator: Authenticator = Authenticator.NONE
internal var followRedirects = true
internal var followSslRedirects = true
internal var cookieJar: CookieJar = CookieJar.NO_COOKIES
internal var cache: Cache? = null
internal var dns: Dns = Dns.SYSTEM
internal var proxy: Proxy? = null
internal var proxySelector: ProxySelector? = null
internal var proxyAuthenticator: Authenticator = Authenticator.NONE
internal var socketFactory: SocketFactory = SocketFactory.getDefault()
internal var sslSocketFactoryOrNull: SSLSocketFactory? = null
internal var x509TrustManagerOrNull: X509TrustManager? = null
internal var connectionSpecs: List<ConnectionSpec> = DEFAULT_CONNECTION_SPECS
internal var protocols: List<Protocol> = DEFAULT_PROTOCOLS
internal var hostnameVerifier: HostnameVerifier = OkHostnameVerifier
internal var certificatePinner: CertificatePinner = CertificatePinner.DEFAULT
internal var certificateChainCleaner: CertificateChainCleaner? = null
internal var callTimeout = 0
internal var connectTimeout = 10_000
internal var readTimeout = 10_000
internal var writeTimeout = 10_000
internal var pingInterval = 0
internal var minWebSocketMessageToCompress = RealWebSocket.DEFAULT_MINIMUM_DEFLATE_SIZE
internal var routeDatabase: RouteDatabase? = null

Request 创建

Request 同样通过建造者模式创建。
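一个 Request 构建的示意(URL、请求头与请求体内容均为假设):

MediaType json = MediaType.parse("application/json; charset=utf-8");
RequestBody body = RequestBody.create(json, "{\"name\":\"test\"}");

Request request = new Request.Builder()
        .url("https://www.example.com/api")
        .header("User-Agent", "demo")
        .post(body)
        .build();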

Call 创建

将 Request 实例传入 OkHttpClient 对象的 newCall 方法中创建 Call 对象。

/** Prepares the [request] to be executed at some point in the future. */
override fun newCall(request: Request): Call = RealCall(this, request, forWebSocket = false)

Call 是一个接口,这里创建的是 Call 的实现类 RealCall。

RealCall

execute 同步

override fun execute(): Response {
check(executed.compareAndSet(false, true)) { "Already Executed" }

timeout.enter()
callStart()
try {
client.dispatcher.executed(this)
return getResponseWithInterceptorChain()
} finally {
client.dispatcher.finished(this)
}
}

首先检查 AtomicBoolean 类型的变量 executed,确认该 Call 是否已经执行过,已经执行过会抛出异常。

超时计时。

调用 OkHttpClient 的 Dispatcher 的 executed 方法执行 Call,调用 getResponseWithInterceptorChain 获得 Response。

最后在 finally 中调用 dispatcher.finished(this),把该 Call 从 Dispatcher 中移除。

enqueue 异步

5
6
override fun enqueue(responseCallback: Callback) {
check(executed.compareAndSet(false, true)) { "Already Executed" }

callStart()
client.dispatcher.enqueue(AsyncCall(responseCallback))
}

将 Callback 封装为一个 AsyncCall。

internal fun enqueue(call: AsyncCall) {
synchronized(this) {
readyAsyncCalls.add(call)

// Mutate the AsyncCall so that it shares the AtomicInteger of an existing running call to
// the same host.
if (!call.call.forWebSocket) {
val existingCall = findExistingCallWithHost(call.host)
if (existingCall != null) call.reuseCallsPerHostFrom(existingCall)
}
}
promoteAndExecute()
}
/**
* Promotes eligible calls from [readyAsyncCalls] to [runningAsyncCalls] and runs them on the
* executor service. Must not be called with synchronization because executing calls can call
* into user code.
*
* @return true if the dispatcher is currently running calls.
*/
private fun promoteAndExecute(): Boolean {
this.assertThreadDoesntHoldLock()

val executableCalls = mutableListOf<AsyncCall>()
val isRunning: Boolean
synchronized(this) {
val i = readyAsyncCalls.iterator()
while (i.hasNext()) {
val asyncCall = i.next()

if (runningAsyncCalls.size >= this.maxRequests) break // Max capacity.
if (asyncCall.callsPerHost.get() >= this.maxRequestsPerHost) continue // Host max capacity.

i.remove()
asyncCall.callsPerHost.incrementAndGet()
executableCalls.add(asyncCall)
runningAsyncCalls.add(asyncCall)
}
isRunning = runningCallsCount() > 0
}

for (i in 0 until executableCalls.size) {
val asyncCall = executableCalls[i]
asyncCall.executeOn(executorService)
}

return isRunning
}
/**
* Attempt to enqueue this async call on [executorService]. This will attempt to clean up
* if the executor has been shut down by reporting the call as failed.
*/
fun executeOn(executorService: ExecutorService) {
client.dispatcher.assertThreadDoesntHoldLock()

var success = false
try {
executorService.execute(this)
success = true
} catch (e: RejectedExecutionException) {
val ioException = InterruptedIOException("executor rejected")
ioException.initCause(e)
noMoreExchanges(ioException)
responseCallback.onFailure(this@RealCall, ioException)
} finally {
if (!success) {
client.dispatcher.finished(this) // This call is no longer running!
}
}
}
override fun run() {
threadName("OkHttp ${redactedUrl()}") {
var signalledCallback = false
timeout.enter()
try {
val response = getResponseWithInterceptorChain()
signalledCallback = true
responseCallback.onResponse(this@RealCall, response)
} catch (e: IOException) {
if (signalledCallback) {
// Do not signal the callback twice!
Platform.get().log("Callback failure for ${toLoggableString()}", Platform.INFO, e)
} else {
responseCallback.onFailure(this@RealCall, e)
}
} catch (t: Throwable) {
cancel()
if (!signalledCallback) {
val canceledException = IOException("canceled due to $t")
canceledException.addSuppressed(t)
responseCallback.onFailure(this@RealCall, canceledException)
}
throw t
} finally {
client.dispatcher.finished(this)
}
}
}

调用 getResponseWithInterceptorChain() 获得 Response。

@Throws(IOException::class)
internal fun getResponseWithInterceptorChain(): Response {
// Build a full stack of interceptors.
val interceptors = mutableListOf<Interceptor>()
interceptors += client.interceptors
interceptors += RetryAndFollowUpInterceptor(client)
interceptors += BridgeInterceptor(client.cookieJar)
interceptors += CacheInterceptor(client.cache)
interceptors += ConnectInterceptor
if (!forWebSocket) {
interceptors += client.networkInterceptors
}
interceptors += CallServerInterceptor(forWebSocket)

val chain = RealInterceptorChain(
call = this,
interceptors = interceptors,
index = 0,
exchange = null,
request = originalRequest,
connectTimeoutMillis = client.connectTimeoutMillis,
readTimeoutMillis = client.readTimeoutMillis,
writeTimeoutMillis = client.writeTimeoutMillis
)

var calledNoMoreExchanges = false
try {
val response = chain.proceed(originalRequest)
if (isCanceled()) {
response.closeQuietly()
throw IOException("Canceled")
}
return response
} catch (e: IOException) {
calledNoMoreExchanges = true
throw noMoreExchanges(e) as Throwable
} finally {
if (!calledNoMoreExchanges) {
noMoreExchanges(null)
}
}
}

这里先初始化 interceptor,再根据这些 interceptor 创建 RealInterceptorChain,调用 proceed 处理请求:

@Throws(IOException::class)
override fun proceed(request: Request): Response {
check(index < interceptors.size)

calls++

if (exchange != null) {
check(exchange.finder.sameHostAndPort(request.url)) {
"network interceptor ${interceptors[index - 1]} must retain the same host and port"
}
check(calls == 1) {
"network interceptor ${interceptors[index - 1]} must call proceed() exactly once"
}
}

// Call the next interceptor in the chain.
val next = copy(index = index + 1, request = request)
val interceptor = interceptors[index]

@Suppress("USELESS_ELVIS")
val response = interceptor.intercept(next) ?: throw NullPointerException(
"interceptor $interceptor returned null")

if (exchange != null) {
check(index + 1 >= interceptors.size || next.calls == 1) {
"network interceptor $interceptor must call proceed() exactly once"
}
}

check(response.body != null) { "interceptor $interceptor returned a response with no body" }

return response
}

调用 copy 函数,获得一个 index = index + 1 的 RealInterceptorChain;然后取出 list 中 index 位置的 interceptor,调用 interceptor.intercept(next) 得到 Response。
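责任链的用法也体现在自定义 interceptor 上。下面是一个假设的应用层日志拦截器示意,通过 addInterceptor 加入,intercept 中调用 chain.proceed 把请求交给链中后面的 interceptor:

Interceptor loggingInterceptor = chain -> {
    Request request = chain.request();
    long start = System.nanoTime();
    Response response = chain.proceed(request); // 交给下一个 interceptor
    long costMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println(request.url() + " -> " + response.code() + " (" + costMs + " ms)");
    return response;
};

OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(loggingInterceptor)
        .build();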

RetryAndFollowUpInterceptor

除了用户通过 OkHttpClient 设置的 interceptor,第一个加到列表中的就是 RetryAndFollowUpInterceptor,TA 的 intercept:

@Throws(IOException::class)
override fun intercept(chain: Interceptor.Chain): Response {
val realChain = chain as RealInterceptorChain
var request = chain.request
val call = realChain.call
var followUpCount = 0
var priorResponse: Response? = null
var newExchangeFinder = true
var recoveredFailures = listOf<IOException>()
while (true) {
call.enterNetworkInterceptorExchange(request, newExchangeFinder)

var response: Response
var closeActiveExchange = true
try {
if (call.isCanceled()) {
throw IOException("Canceled")
}

try {
response = realChain.proceed(request)
newExchangeFinder = true
} catch (e: RouteException) {
// The attempt to connect via a route failed. The request will not have been sent.
if (!recover(e.lastConnectException, call, request, requestSendStarted = false)) {
throw e.firstConnectException.withSuppressed(recoveredFailures)
} else {
recoveredFailures += e.firstConnectException
}
newExchangeFinder = false
continue
} catch (e: IOException) {
// An attempt to communicate with a server failed. The request may have been sent.
if (!recover(e, call, request, requestSendStarted = e !is ConnectionShutdownException)) {
throw e.withSuppressed(recoveredFailures)
} else {
recoveredFailures += e
}
newExchangeFinder = false
continue
}

// Attach the prior response if it exists. Such responses never have a body.
if (priorResponse != null) {
response = response.newBuilder()
.priorResponse(priorResponse.newBuilder()
.body(null)
.build())
.build()
}

val exchange = call.interceptorScopedExchange
val followUp = followUpRequest(response, exchange)

if (followUp == null) {
if (exchange != null && exchange.isDuplex) {
call.timeoutEarlyExit()
}
closeActiveExchange = false
return response
}

val followUpBody = followUp.body
if (followUpBody != null && followUpBody.isOneShot()) {
closeActiveExchange = false
return response
}

response.body?.closeQuietly()

if (++followUpCount > MAX_FOLLOW_UPS) {
throw ProtocolException("Too many follow-up requests: $followUpCount")
}

request = followUp
priorResponse = response
} finally {
call.exitNetworkInterceptorExchange(closeActiveExchange)
}
}
}

可以看到这里调用了 realChain.proceed,把请求交给链中的下一个 interceptor。其它的 interceptor 也是这样,直到 list 中的最后一个 CallServerInterceptor:

@Throws(IOException::class)
override fun intercept(chain: Interceptor.Chain): Response {
val realChain = chain as RealInterceptorChain
val exchange = realChain.exchange!!
val request = realChain.request
val requestBody = request.body
val sentRequestMillis = System.currentTimeMillis()

exchange.writeRequestHeaders(request)

var invokeStartEvent = true
var responseBuilder: Response.Builder? = null
if (HttpMethod.permitsRequestBody(request.method) && requestBody != null) {
// If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
// Continue" response before transmitting the request body. If we don't get that, return
// what we did get (such as a 4xx response) without ever transmitting the request body.
if ("100-continue".equals(request.header("Expect"), ignoreCase = true)) {
exchange.flushRequest()
responseBuilder = exchange.readResponseHeaders(expectContinue = true)
exchange.responseHeadersStart()
invokeStartEvent = false
}
if (responseBuilder == null) {
if (requestBody.isDuplex()) {
// Prepare a duplex body so that the application can send a request body later.
exchange.flushRequest()
val bufferedRequestBody = exchange.createRequestBody(request, true).buffer()
requestBody.writeTo(bufferedRequestBody)
} else {
// Write the request body if the "Expect: 100-continue" expectation was met.
val bufferedRequestBody = exchange.createRequestBody(request, false).buffer()
requestBody.writeTo(bufferedRequestBody)
bufferedRequestBody.close()
}
} else {
exchange.noRequestBody()
if (!exchange.connection.isMultiplexed) {
// If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
// from being reused. Otherwise we're still obligated to transmit the request body to
// leave the connection in a consistent state.
exchange.noNewExchangesOnConnection()
}
}
} else {
exchange.noRequestBody()
}

if (requestBody == null || !requestBody.isDuplex()) {
exchange.finishRequest()
}
if (responseBuilder == null) {
responseBuilder = exchange.readResponseHeaders(expectContinue = false)!!
if (invokeStartEvent) {
exchange.responseHeadersStart()
invokeStartEvent = false
}
}
var response = responseBuilder
.request(request)
.handshake(exchange.connection.handshake())
.sentRequestAtMillis(sentRequestMillis)
.receivedResponseAtMillis(System.currentTimeMillis())
.build()
var code = response.code
if (code == 100) {
// Server sent a 100-continue even though we did not request one. Try again to read the actual
// response status.
responseBuilder = exchange.readResponseHeaders(expectContinue = false)!!
if (invokeStartEvent) {
exchange.responseHeadersStart()
}
response = responseBuilder
.request(request)
.handshake(exchange.connection.handshake())
.sentRequestAtMillis(sentRequestMillis)
.receivedResponseAtMillis(System.currentTimeMillis())
.build()
code = response.code
}

exchange.responseHeadersEnd(response)

response = if (forWebSocket && code == 101) {
// Connection is upgrading, but we need to ensure interceptors see a non-null response body.
response.newBuilder()
.body(EMPTY_RESPONSE)
.build()
} else {
response.newBuilder()
.body(exchange.openResponseBody(response))
.build()
}
if ("close".equals(response.request.header("Connection"), ignoreCase = true) ||
"close".equals(response.header("Connection"), ignoreCase = true)) {
exchange.noNewExchangesOnConnection()
}
if ((code == 204 || code == 205) && response.body?.contentLength() ?: -1L > 0L) {
throw ProtocolException(
"HTTP $code had non-zero Content-Length: ${response.body?.contentLength()}")
}
return response
}

是 chain 中的最后一个 interceptor,向 server 进行网络请求。这一个 interceptor 结束后,return Response,前面的 interceptor 再进行处理。

RetryAndFollowUpInterceptor

BridgeInterceptor

CacheInterceptor

ConnectInterceptor

CallServerInterceptor

基本流程

客户端(如浏览器)先发起一个 HTTPS 请求,服务端响应这个请求,并把自己的证书发送给客户端。浏览器检查该 CA 证书是否合法,不合法会给出警告;合法则浏览器生成一个随机数,并用证书里服务端的公钥加密这个随机数,传给服务端。服务端用对应的私钥解密,得到这个随机数。之后双方用该随机数对通信数据进行对称加密:服务端把加密后的数据传给客户端,客户端持有随机数,可以解密,反之亦然。后续的内容都通过对称加密传输。
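站在客户端代码的角度,证书校验和握手通常由平台完成。下面是一个最小示意(假设使用 JDK 自带的 HttpsURLConnection 与系统信任库,URL 为假设):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class HttpsGetDemo {
    public static void main(String[] args) throws Exception {
        // 平台内置的 TrustManager 按系统根证书验证服务端证书链,握手由 JSSE 自动完成
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://www.example.com/").openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            System.out.println("cipher suite: " + conn.getCipherSuite());
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}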

细节

建立连接具体细节

先是 TCP 3 次握手,然后客户端再发一个 Client Hello 的包,然后服务端响应一个 Server Hello,接着再给客户端发送证书。

在 Client Hello 包里,会告知使用的 TLS 版本,一个随机数,支持的加密套件。

Server Hello 里有:选中的加密套件。

接着服务端一般会发送给客户端好几个证书。

CA 证书

认证机构用自己的私钥对证书内容的摘要进行签名,客户端用 CA 的公钥验证该签名,以确认证书没有被篡改。

  1. 证书包含?

    颁发机构信息

    服务端公钥

    公司信息

    域名

    有效期

    指纹

  2. 证书验证链

非对称加密

对称加密

前向安全性

TLS (Transport Layer Security)传输层安全性协议

SSL(Secure Sockets Layer)安全套接层

疑问

降级

HTTPS 的 URL 是加密的吗?

非浏览器等客户端会如何“告警”?

如 SSH 需要手动验证签名是否正确。

告警后仍访问,数据会加密吗?

参考

再谈HTTPS

HTTPS 原理分析——带着疑问层层深入 | leapMie

Lifecycle 类 源码注释

Defines an object that has an Android Lifecycle. Fragment and FragmentActivity classes implement LifecycleOwner interface which has the getLifecycle method to access the Lifecycle. You can also implement LifecycleOwner in your own classes.

Event#ON_CREATE, Event#ON_START, Event#ON_RESUME events in this class are dispatched after the LifecycleOwner‘s related method returns. Event#ON_PAUSE, Event#ON_STOP, Event#ON_DESTROY events in this class are dispatched before the LifecycleOwner‘s related method is called. For instance, Event#ON_START will be dispatched after onStart returns, Event#ON_STOP will be dispatched before onStop is called. This gives you certain guarantees on which state the owner is in.

If you use Java 8 Language, then observe events with DefaultLifecycleObserver. To include it you should add "androidx.lifecycle:lifecycle-common-java8:<version>" to your build.gradle file.

class TestObserver implements DefaultLifecycleObserver {
@Override
public void onCreate(LifecycleOwner owner) {
// your code
}
}

If you use Java 7 Language, Lifecycle events are observed using annotations. Once Java 8 Language becomes mainstream on Android, annotations will be deprecated, so between DefaultLifecycleObserver and annotations, you must always prefer DefaultLifecycleObserver.

class TestObserver implements LifecycleObserver {
@OnLifecycleEvent(ON_STOP)
void onStopped() {}
}

Observer methods can receive zero or one argument. If used, the first argument must be of type LifecycleOwner. Methods annotated with Event#ON_ANY can receive the second argument, which must be of type Event.

class TestObserver implements LifecycleObserver {
@OnLifecycleEvent(ON_CREATE)
void onCreated(LifecycleOwner source) {}
@OnLifecycleEvent(ON_ANY)
void onAny(LifecycleOwner source, Event event) {}
}

These additional parameters are provided to allow you to conveniently observe multiple providers and events without tracking them manually.


Lifecycle 使用 Event 和 State 来管理生命周期状态的变化。

Lifecycle

LifecycleOwner

接口。表示一个有 Android lifecycle 的类。

可以通过 getLifecycle() 获得 Lifecycle。

androidx 中的 ComponentActivity 和 Fragment 都实现了 LifecycleOwner 接口。

ComponentActivity 中的实现(androidx.activity:activity:1.1.0):

@NonNull
@Override
public Lifecycle getLifecycle() {
return mLifecycleRegistry;
}
private final LifecycleRegistry mLifecycleRegistry = new LifecycleRegistry(this);
@CallSuper
@Override
protected void onSaveInstanceState(@NonNull Bundle outState) {
Lifecycle lifecycle = getLifecycle();
if (lifecycle instanceof LifecycleRegistry) {
((LifecycleRegistry) lifecycle).setCurrentState(Lifecycle.State.CREATED);
}
super.onSaveInstanceState(outState);
mSavedStateRegistryController.performSave(outState);
}

LifecycleRegistry 是抽象类 Lifecycle 的实现类,可以处理多个 observer。
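注册方式的示意(在 Activity / Fragment 中,需要 lifecycle-common-java8 提供的 DefaultLifecycleObserver):

getLifecycle().addObserver(new DefaultLifecycleObserver() {
    @Override
    public void onStart(LifecycleOwner owner) {
        // 按上面源码注释的约定,这里在 owner 的 onStart 返回之后被回调
    }

    @Override
    public void onStop(LifecycleOwner owner) {
        // 这里在 owner 的 onStop 被调用之前被回调
    }
});

再看 ComponentActivity 的 onCreate: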

@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mSavedStateRegistryController.performRestore(savedInstanceState);
ReportFragment.injectIfNeededIn(this);
if (mContentLayoutId != 0) {
setContentView(mContentLayoutId);
}
}

ReportFragment#injectIfNeededIn

public static void injectIfNeededIn(Activity activity) {
if (Build.VERSION.SDK_INT >= 29) {
// On API 29+, we can register for the correct Lifecycle callbacks directly
activity.registerActivityLifecycleCallbacks(
new LifecycleCallbacks());
}
// Prior to API 29 and to maintain compatibility with older versions of
// ProcessLifecycleOwner (which may not be updated when lifecycle-runtime is updated and
// need to support activities that don't extend from FragmentActivity from support lib),
// use a framework fragment to get the correct timing of Lifecycle events
android.app.FragmentManager manager = activity.getFragmentManager();
if (manager.findFragmentByTag(REPORT_FRAGMENT_TAG) == null) {
manager.beginTransaction().add(new ReportFragment(), REPORT_FRAGMENT_TAG).commit();
// Hopefully, we are the first to make a transaction.
manager.executePendingTransactions();
}
}

这里创建了一个不可见的 ReportFragment 来监听 Activity 的生命周期。按其源码注释,ReportFragment 是一个用于分发初始化事件的内部(internal)类。

@Override
public void onActivityCreated(Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
dispatchCreate(mProcessListener);
dispatch(Lifecycle.Event.ON_CREATE);
}

@Override
public void onStart() {
super.onStart();
dispatchStart(mProcessListener);
dispatch(Lifecycle.Event.ON_START);
}

@Override
public void onResume() {
super.onResume();
dispatchResume(mProcessListener);
dispatch(Lifecycle.Event.ON_RESUME);
}

@Override
public void onPause() {
super.onPause();
dispatch(Lifecycle.Event.ON_PAUSE);
}

@Override
public void onStop() {
super.onStop();
dispatch(Lifecycle.Event.ON_STOP);
}

@Override
public void onDestroy() {
super.onDestroy();
dispatch(Lifecycle.Event.ON_DESTROY);
// just want to be sure that we won't leak reference to an activity
mProcessListener = null;
}

在 ReportFragment 中调用这些生命周期方法来 dispatch 生命周期事件。

private void dispatch(@NonNull Lifecycle.Event event) {
if (Build.VERSION.SDK_INT < 29) {
// Only dispatch events from ReportFragment on API levels prior
// to API 29. On API 29+, this is handled by the ActivityLifecycleCallbacks
// added in ReportFragment.injectIfNeededIn
dispatch(getActivity(), event);
}
}
@SuppressWarnings("deprecation")
static void dispatch(@NonNull Activity activity, @NonNull Lifecycle.Event event) {
// LifecycleRegistryOwner 已经 deprecated
if (activity instanceof LifecycleRegistryOwner) {
((LifecycleRegistryOwner) activity).getLifecycle().handleLifecycleEvent(event);
return;
}

if (activity instanceof LifecycleOwner) {
Lifecycle lifecycle = ((LifecycleOwner) activity).getLifecycle();
if (lifecycle instanceof LifecycleRegistry) {
((LifecycleRegistry) lifecycle).handleLifecycleEvent(event);
}
}
}

LifecycleRegistry#handleLifecycleEvent

public void handleLifecycleEvent(@NonNull Lifecycle.Event event) {
State next = getStateAfter(event);
moveToState(next);
}

getStateAfter 获得 event 后的 state:

static State getStateAfter(Event event) {
switch (event) {
case ON_CREATE:
case ON_STOP:
return CREATED;
case ON_START:
case ON_PAUSE:
return STARTED;
case ON_RESUME:
return RESUMED;
case ON_DESTROY:
return DESTROYED;
case ON_ANY:
break;
}
throw new IllegalArgumentException("Unexpected event value " + event);
}

LifecycleRegistry#moveToState

private void moveToState(State next) {
if (mState == next) {
return;
}
mState = next;
if (mHandlingEvent || mAddingObserverCounter != 0) {
mNewEventOccurred = true;
// we will figure out what to do on upper level.
return;
}
mHandlingEvent = true;
sync();
mHandlingEvent = false;
}
// happens only on the top of stack (never in reentrance),
// so it doesn't have to take in account parents
private void sync() {
LifecycleOwner lifecycleOwner = mLifecycleOwner.get();
if (lifecycleOwner == null) {
throw new IllegalStateException("LifecycleOwner of this LifecycleRegistry is already"
+ "garbage collected. It is too late to change lifecycle state.");
}
while (!isSynced()) {
mNewEventOccurred = false;
// no need to check eldest for nullability, because isSynced does it for us.
if (mState.compareTo(mObserverMap.eldest().getValue().mState) < 0) {
backwardPass(lifecycleOwner);
}
Entry<LifecycleObserver, ObserverWithState> newest = mObserverMap.newest();
if (!mNewEventOccurred && newest != null
&& mState.compareTo(newest.getValue().mState) > 0) {
forwardPass(lifecycleOwner);
}
}
mNewEventOccurred = false;
}
private void forwardPass(LifecycleOwner lifecycleOwner) {
Iterator<Entry<LifecycleObserver, ObserverWithState>> ascendingIterator =
mObserverMap.iteratorWithAdditions();
while (ascendingIterator.hasNext() && !mNewEventOccurred) {
Entry<LifecycleObserver, ObserverWithState> entry = ascendingIterator.next();
ObserverWithState observer = entry.getValue();
while ((observer.mState.compareTo(mState) < 0 && !mNewEventOccurred
&& mObserverMap.contains(entry.getKey()))) {
pushParentState(observer.mState);
observer.dispatchEvent(lifecycleOwner, upEvent(observer.mState));
popParentState();
}
}
}
private void backwardPass(LifecycleOwner lifecycleOwner) {
Iterator<Entry<LifecycleObserver, ObserverWithState>> descendingIterator =
mObserverMap.descendingIterator();
while (descendingIterator.hasNext() && !mNewEventOccurred) {
Entry<LifecycleObserver, ObserverWithState> entry = descendingIterator.next();
ObserverWithState observer = entry.getValue();
while ((observer.mState.compareTo(mState) > 0 && !mNewEventOccurred
&& mObserverMap.contains(entry.getKey()))) {
Event event = downEvent(observer.mState);
pushParentState(getStateAfter(event));
observer.dispatchEvent(lifecycleOwner, event);
popParentState();
}
}
}
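
To see the dispatch path end to end, here is a minimal observer sketch (MyObserver is a made-up class name; the rest is the standard androidx.lifecycle API). Events reported by ReportFragment or the API 29+ callbacks reach LifecycleRegistry#handleLifecycleEvent, and forwardPass/backwardPass then deliver them to observers such as this one.

import androidx.lifecycle.Lifecycle;
import androidx.lifecycle.LifecycleEventObserver;
import androidx.lifecycle.LifecycleOwner;

// Hypothetical observer: LifecycleRegistry forwards ON_CREATE/ON_START/...
// events to every registered observer, moving each toward the owner's state.
public class MyObserver implements LifecycleEventObserver {
    @Override
    public void onStateChanged(LifecycleOwner source, Lifecycle.Event event) {
        // called with the event that moves this observer one step
        // closer to the owner's current state
        android.util.Log.d("MyObserver",
                "event=" + event + " state=" + source.getLifecycle().getCurrentState());
    }
}

// Registration, e.g. in an Activity's onCreate:
// getLifecycle().addObserver(new MyObserver());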

Abstract

ViewModel version:

implementation "androidx.lifecycle:lifecycle-viewmodel:2.2.0"

ViewModel class: source comments

ViewModel is a class that is responsible for preparing and managing the data for an Activity or a Fragment. It also handles the communication of the Activity / Fragment with the rest of the application (e.g. calling the business logic classes).

In short, a ViewModel prepares and manages the data for an Activity or Fragment, and handles their communication with the rest of the application (e.g. calling business-logic classes).

A ViewModel is always created in association with a scope (a fragment or an activity) and will be retained as long as the scope is alive. E.g. if it is an Activity, until it is finished.

In other words, this means that a ViewModel will not be destroyed if its owner is destroyed for a configuration change (e.g. rotation). The new owner instance just re-connects to the existing model.

In other words, if the ViewModel's owner is destroyed for a configuration change, the ViewModel is not; the new owner instance simply re-connects to the existing model.

The purpose of the ViewModel is to acquire and keep the information that is necessary for an Activity or a Fragment. The Activity or the Fragment should be able to observe changes in the ViewModel. ViewModels usually expose this information via LiveData or Android Data Binding. You can also use any observability construct from you favorite framework.

That is, the ViewModel's purpose is to acquire and keep the information an Activity / Fragment needs. The Activity / Fragment should be able to observe changes in the ViewModel, which is usually exposed via LiveData or Data Binding, though any observability construct from your favourite framework works.

ViewModel’s only responsibility is to manage the data for the UI. It should never access your view hierarchy or hold a reference back to the Activity or the Fragment.

The ViewModel's only responsibility is managing the data for the UI; it must never access the view hierarchy or hold a reference back to the Activity or Fragment.

Typical usage from an Activity standpoint would be:

public class UserActivity extends Activity {

@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.user_activity_layout);
final UserModel viewModel = new ViewModelProvider(this).get(UserModel.class);
viewModel.getUser().observe(this, new Observer<User>() {
@Override
public void onChanged(@Nullable User data) {
// update ui.
}
});
findViewById(R.id.button).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
viewModel.doAction();
}
});
}
}

ViewModel would be:

public class UserModel extends ViewModel {
private final MutableLiveData<User> userLiveData = new MutableLiveData<>();

public LiveData<User> getUser() {
return userLiveData;
}

public UserModel() {
// trigger user load.
}

void doAction() {
// depending on the action, do necessary business logic calls and update the
// userLiveData.
}
}

ViewModels can also be used as a communication layer between different Fragments of an Activity. Each Fragment can acquire the ViewModel using the same key via their Activity. This allows communication between Fragments in a de-coupled fashion such that they never need to talk to the other Fragment directly.

public class MyFragment extends Fragment {
public void onStart() {
UserModel userModel = new ViewModelProvider(requireActivity()).get(UserModel.class);
}
}

ViewModelProvider

Constructors

ViewModelProvider has three overloaded constructors:

public ViewModelProvider(@NonNull ViewModelStoreOwner owner)

ViewModelStoreOwner is an interface representing ownership of a ViewModelStore. Both AppCompatActivity and Fragment implement ViewModelStoreOwner, directly or indirectly.

ViewModelStore is the class that stores ViewModels. Internally it maintains a HashMap<String, ViewModel> holding every ViewModel that has been created. A ViewModelStore instance must survive configuration changes: if its owner is destroyed and recreated because of a configuration change, the new owner instance should still hold the old ViewModelStore instance. If the owner is destroyed and will not be recreated, ViewModelStore's clear method should be called so the ViewModels are notified that they are no longer used.

public ViewModelProvider(@NonNull ViewModelStoreOwner owner, @NonNull Factory factory) {

Both of the constructors above delegate to the one below:

public ViewModelProvider(@NonNull ViewModelStore store, @NonNull Factory factory) {

ViewModels are created by the given factory and stored in the ViewModelStore.

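As an illustration of the two-argument constructor, here is a hedged sketch of a custom Factory that injects a constructor parameter; ProfileViewModel and ProfileRepository are made-up names, not classes from this article, and the default reflection-based factory is shown further below.

import androidx.annotation.NonNull;
import androidx.lifecycle.ViewModel;
import androidx.lifecycle.ViewModelProvider;

// Hypothetical factory: builds ProfileViewModel with a repository instead of
// relying on the no-argument reflection path of the default factory.
public class ProfileViewModelFactory implements ViewModelProvider.Factory {
    private final ProfileRepository repository;   // hypothetical dependency

    public ProfileViewModelFactory(ProfileRepository repository) {
        this.repository = repository;
    }

    @SuppressWarnings("unchecked")
    @NonNull
    @Override
    public <T extends ViewModel> T create(@NonNull Class<T> modelClass) {
        if (modelClass.isAssignableFrom(ProfileViewModel.class)) {
            return (T) new ProfileViewModel(repository);
        }
        throw new IllegalArgumentException("Unknown ViewModel class: " + modelClass);
    }
}

// Usage from a ViewModelStoreOwner (Activity / Fragment):
// ProfileViewModel vm =
//         new ViewModelProvider(this, new ProfileViewModelFactory(repo)).get(ProfileViewModel.class);
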
ViewModelProvider#get()

/**
* Returns an existing ViewModel or creates a new one in the scope (usually, a fragment or
* an activity), associated with this {@code ViewModelProvider}.
* <p>
* The created ViewModel is associated with the given scope and will be retained
* as long as the scope is alive (e.g. if it is an activity, until it is
* finished or process is killed).
*
* @param key The key to use to identify the ViewModel.
* @param modelClass The class of the ViewModel to create an instance of it if it is not
* present.
* @param <T> The type parameter for the ViewModel.
* @return A ViewModel that is an instance of the given type {@code T}.
*/
@SuppressWarnings("unchecked")
@NonNull
@MainThread
public <T extends ViewModel> T get(@NonNull String key, @NonNull Class<T> modelClass) {
ViewModel viewModel = mViewModelStore.get(key);

if (modelClass.isInstance(viewModel)) {
if (mFactory instanceof OnRequeryFactory) {
((OnRequeryFactory) mFactory).onRequery(viewModel);
}
return (T) viewModel;
} else {
//noinspection StatementWithEmptyBody
if (viewModel != null) {
// TODO: log a warning.
}
}
if (mFactory instanceof KeyedFactory) {
viewModel = ((KeyedFactory) (mFactory)).create(key, modelClass);
} else {
viewModel = (mFactory).create(modelClass);
}
mViewModelStore.put(key, viewModel);
return (T) viewModel;
}
@SuppressWarnings("ClassNewInstance")
@NonNull
@Override
public <T extends ViewModel> T create(@NonNull Class<T> modelClass) {
//noinspection TryWithIdenticalCatches
try {
return modelClass.newInstance();
} catch (InstantiationException e) {
throw new RuntimeException("Cannot create an instance of " + modelClass, e);
} catch (IllegalAccessException e) {
throw new RuntimeException("Cannot create an instance of " + modelClass, e);
}
}

When the ViewModel does not exist yet, i.e. the first time this method is called, the factory creates one via reflection, and mViewModelStore.put(key, viewModel) stores it in the ViewModelStore.

When the Activity / Fragment is created again after a configuration change, the old ViewModel is retrieved from the ViewModelStore by key. This is exactly the behaviour described in the ViewModel source comments: "the new owner instance just re-connects to the existing model".

ViewModelStore

ViewModelStore#clear()

/**
* Clears internal storage and notifies ViewModels that they are no longer used.
*/
public final void clear() {
for (ViewModel vm : mMap.values()) {
vm.clear();
}
mMap.clear();
}

clear() notifies each ViewModel that it is no longer used and then clears the internal HashMap<String, ViewModel>.

In ComponentActivity:

getLifecycle().addObserver(new LifecycleEventObserver() {
@Override
public void onStateChanged(@NonNull LifecycleOwner source,
@NonNull Lifecycle.Event event) {
if (event == Lifecycle.Event.ON_DESTROY) {
if (!isChangingConfigurations()) {
getViewModelStore().clear();
}
}
}
});

ViewModelStore's clear() is called only when the destruction is not caused by a configuration change, so the data held in ViewModels is preserved across configuration changes.

In FragmentManagerViewModel:

void clearNonConfigState(@NonNull Fragment f) {
...
// Clear and remove the Fragment's ViewModelStore
ViewModelStore viewModelStore = mViewModelStores.get(f.mWho);
if (viewModelStore != null) {
viewModelStore.clear();
mViewModelStores.remove(f.mWho);
}
}
void destroy(@NonNull FragmentHostCallback<?> host,
@NonNull FragmentManagerViewModel nonConfig) {
if (FragmentManager.isLoggingEnabled(Log.DEBUG)) {
Log.d(TAG, "movefrom CREATED: " + mFragment);
}
boolean beingRemoved = mFragment.mRemoving && !mFragment.isInBackStack();
boolean shouldDestroy = beingRemoved || nonConfig.shouldDestroy(mFragment);
if (shouldDestroy) {
boolean shouldClear;
if (host instanceof ViewModelStoreOwner) {
shouldClear = nonConfig.isCleared();
} else if (host.getContext() instanceof Activity) {
Activity activity = (Activity) host.getContext();
shouldClear = !activity.isChangingConfigurations();
} else {
shouldClear = true;
}
if (beingRemoved || shouldClear) {
nonConfig.clearNonConfigState(mFragment);
}
mFragment.performDestroy();
mDispatcher.dispatchOnFragmentDestroyed(mFragment, false);
} else {
mFragment.mState = Fragment.ATTACHED;
}
}

shouldClear = !activity.isChangingConfigurations(); likewise ensures the store is cleared only when the destruction is not caused by a configuration change.

Kotlin extensions

implementation "androidx.lifecycle:lifecycle-viewmodel-ktx:2.2.0"

In a Fragment:

import androidx.fragment.app.viewModels
...
private val viewModel: GalleryViewModel by viewModels()

The Kotlin extension function on Fragment wraps the usual steps of creating a ViewModel:

@MainThread
inline fun <reified VM : ViewModel> Fragment.viewModels(
noinline ownerProducer: () -> ViewModelStoreOwner = { this },
noinline factoryProducer: (() -> Factory)? = null
) = createViewModelLazy(VM::class, { ownerProducer().viewModelStore }, factoryProducer)
@MainThread
fun <VM : ViewModel> Fragment.createViewModelLazy(
viewModelClass: KClass<VM>,
storeProducer: () -> ViewModelStore,
factoryProducer: (() -> Factory)? = null
): Lazy<VM> {
val factoryPromise = factoryProducer ?: {
defaultViewModelProviderFactory
}
return ViewModelLazy(viewModelClass, storeProducer, factoryPromise)
}
class ViewModelLazy<VM : ViewModel> (
private val viewModelClass: KClass<VM>,
private val storeProducer: () -> ViewModelStore,
private val factoryProducer: () -> ViewModelProvider.Factory
) : Lazy<VM> {
private var cached: VM? = null

override val value: VM
get() {
val viewModel = cached
return if (viewModel == null) {
val factory = factoryProducer()
val store = storeProducer()
// ultimately the ViewModel is still created through ViewModelProvider
ViewModelProvider(store, factory).get(viewModelClass.java).also {
cached = it
}
} else {
viewModel
}
}

override fun isInitialized() = cached != null
}

The ViewModel property is delegated to viewModels(), a lazy property delegate: the ViewModel is created the first time it is used, and later accesses return the cached instance directly.

Summary

Key idea: ViewModels are stored in a ViewModelStore, which lives relatively independently of the Activity / Fragment.

References

Source code

ViewModel Overview | Android Developers

Jetpack 中的 ViewModel・Leo’s Studio

Time-sharing

The host allocates the CPU to each terminal in turn in very short "time slices" until all jobs have finished running.

CPU modes

Kernel mode: can access all resources and execute all instructions; used by the supervisor / OS kernel.

User mode: can access only part of the resources, the rest are restricted; used by user programs.

Switching happens between user mode and kernel mode.

Critical resource: a resource that only one process may access at a time.

Critical section: the segment of a process's code that accesses a critical resource.

System call

A service or function that the operating-system kernel provides to application programs.

Ways to guard a critical section: hardware approaches; software approaches: locks and semaphores.

Lock mechanism: a "flag" indicates whether the critical resource is available. Lock and unlock are primitives, i.e. atomic operations that cannot be interrupted.

Semaphores

Process management

Process, informally: a running program.

The machine state of a process is what the program can read or update while it runs:

memory, registers, and the list of open files.

Process states:

running

ready

blocked

Process scheduling

Condition variables

When some condition does not hold, a thread can put itself on a queue, waiting on that condition. Another thread, when it changes the condition, can wake one or more waiting threads (by signalling on the condition) so that they continue running.

Hold the lock when calling wait and signal.

The producer/consumer (bounded buffer) problem

Mesa semantics: always re-check the condition in a while loop, as in the sketch below.

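A compact Java sketch of the bounded-buffer idea with Mesa semantics: the lock is held while waiting and signalling, and the condition is always re-checked in a while loop after waking up.

import java.util.ArrayDeque;
import java.util.Queue;

// Bounded buffer guarded by an intrinsic lock; wait()/notifyAll() play the
// roles of the condition-variable wait and signal described above.
class BoundedBuffer<T> {
    private final Queue<T> items = new ArrayDeque<>();
    private final int capacity;

    BoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {   // Mesa semantics: re-check after wake-up
            wait();
        }
        items.add(item);
        notifyAll();                          // wake consumers waiting for data
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {             // while, never if
            wait();
        }
        T item = items.remove();
        notifyAll();                          // wake producers waiting for space
        return item;
    }
}
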
Semaphores

Binary semaphore (a lock)

Semaphores used as condition variables

The producer/consumer (bounded buffer) problem

The dining philosophers problem

Solved by changing the order in which one or more philosophers pick up their forks.

Deadlock

Mutual exclusion: threads require mutually exclusive access to the resources they need.

Hold-and-wait: a thread holds resources while waiting for other resources.

No preemption: resources a thread has acquired cannot be taken away from it.

Circular wait: there is a cycle of threads in which each thread holds one resource that the next thread in the cycle is requesting.

Deadlock prevention

Circular wait

Probably the most practical and most widely used approach is to write code that can never produce a circular wait. The most direct way is to impose a total order on lock acquisition, e.g. always acquire lock L1 before lock L2. In complex systems where a total order is hard to achieve, a partial order can be used; see the sketch below.

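For example, a lock order can be enforced by always locking two objects in the same globally agreed order; here the order comes from System.identityHashCode, a common trick when no natural ordering exists (the rare hash-collision tie-break is omitted for brevity).

import java.util.concurrent.locks.ReentrantLock;

// Transfer-style example: every thread acquires 'first' then 'second',
// so no cycle of waiting threads can form.
class Account {
    final ReentrantLock lock = new ReentrantLock();
    long balance;
}

class Transfer {
    static void transfer(Account from, Account to, long amount) {
        Account first = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}
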
Hold-and-wait

Avoided by acquiring all locks atomically, which requires first acquiring a global prevention lock.

No preemption

trylock

This may cause livelock; waiting a random amount of time before retrying the whole sequence reduces repeated interference between threads.

Mutual exclusion

Avoid mutual exclusion entirely by using wait-free data structures, e.g. compare-and-swap (CAS).

Avoiding deadlock via scheduling

Detect and recover

Lock-based concurrent data structures

Scalable counting: the sloppy counter

Thread: the smallest unit scheduled by the operating system. A process can create multiple threads; each thread has its own program counter, stack, local variables and other properties, and can access shared memory variables.

Synchronization and P-V operations

API 30

Handler

Source comments

A Handler allows you to send and process Message and Runnable objects associated with a thread’s MessageQueue. Each Handler instance is associated with a single thread and that thread’s message queue. When you create a new Handler it is bound to a Looper. It will deliver messages and runnables to that Looper’s message queue and execute them on that Looper’s thread.

In short, a Handler lets you send and process Message and Runnable objects associated with a thread's MessageQueue. Each Handler instance is associated with a single thread and that thread's message queue: when you create a new Handler it is bound to a Looper, and it delivers messages and runnables to that Looper's message queue and executes them on that Looper's thread.

There are two main uses for a Handler: (1) to schedule messages and runnables to be executed at some point in the future; and (2) to enqueue an action to be performed on a different thread than your own.

A Handler has two main uses: (1) scheduling messages and runnables to be executed at some point in the future, and (2) enqueueing an action to be performed on a thread other than your own.

Scheduling messages is accomplished with the post(Runnable), postAtTime(java.lang.Runnable, long), postDelayed(Runnable, Object, long), sendEmptyMessage(int), sendMessage(Message), sendMessageAtTime(Message, long), and sendMessageDelayed(Message, long) methods. The post versions allow you to enqueue Runnable objects to be called by the message queue when they are received; the sendMessage versions allow you to enqueue a Message object containing a bundle of data that will be processed by the Handler’s handleMessage(Message) method (requiring that you implement a subclass of Handler).

Scheduling is done with post(Runnable), postAtTime(java.lang.Runnable, long), postDelayed(Runnable, Object, long), sendEmptyMessage(int), sendMessage(Message), sendMessageAtTime(Message, long) and sendMessageDelayed(Message, long). The post variants enqueue Runnable objects that are invoked when the message queue reaches them; the sendMessage variants enqueue a Message object carrying a bundle of data that is processed by the Handler's handleMessage(Message) method (which requires implementing a Handler subclass).

When posting or sending to a Handler, you can either allow the item to be processed as soon as the message queue is ready to do so, or specify a delay before it gets processed or absolute time for it to be processed. The latter two allow you to implement timeouts, ticks, and other timing-based behavior.

When posting or sending to a Handler, the item can be processed as soon as the message queue is ready, after a specified delay, or at an absolute time; the latter two enable timeouts, ticks and other timing-based behaviour.

When a process is created for your application, its main thread is dedicated to running a message queue that takes care of managing the top-level application objects (activities, broadcast receivers, etc) and any windows they create. You can create your own threads, and communicate back with the main application thread through a Handler. This is done by calling the same post or sendMessage methods as before, but from your new thread. The given Runnable or Message will then be scheduled in the Handler’s message queue and processed when appropriate.

That is, when a process is created for your application, its main thread runs a message queue that manages the top-level application objects (activities, broadcast receivers, etc.) and the windows they create. You can create your own threads and communicate back with the main thread through a Handler, by calling the same post or sendMessage methods from the new thread; the given Runnable or Message is then scheduled in the Handler's message queue and processed when appropriate.

Handler#post

Adds the Runnable to the message queue. The Runnable runs on the thread to which this Handler is attached.

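A small sketch of the typical use: posting work from a background thread back to the main thread. The "background work" and the TextView update are placeholders.

import android.os.Handler;
import android.os.Looper;
import android.widget.TextView;

class PostExample {
    // A Handler attached to the main Looper; runnables posted to it run on the main thread.
    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    void load(final TextView textView) {
        new Thread(() -> {
            final String result = "done";                        // stand-in for real background work
            mainHandler.post(() -> textView.setText(result));    // executed on the main thread
        }).start();
    }
}
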
Handler#sendMessage

Handler#sendMessage ends up setting the Message's target to this, binding the Message to this Handler.

public final boolean sendMessage(@NonNull Message msg) {
return sendMessageDelayed(msg, 0);
}
public final boolean sendMessageDelayed(@NonNull Message msg, long delayMillis) {
if (delayMillis < 0) {
delayMillis = 0;
}
return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
}

Here the delivery time is set to the device uptime (SystemClock.uptimeMillis()) plus the requested delayMillis.

public boolean sendMessageAtTime(@NonNull Message msg, long uptimeMillis) {
MessageQueue queue = mQueue;
if (queue == null) {
RuntimeException e = new RuntimeException(
this + " sendMessageAtTime() called with no mQueue");
Log.w("Looper", e.getMessage(), e);
return false;
}
return enqueueMessage(queue, msg, uptimeMillis);
}
private boolean enqueueMessage(@NonNull MessageQueue queue, @NonNull Message msg,
long uptimeMillis) {
msg.target = this;
msg.workSourceUid = ThreadLocalWorkSource.getUid();

if (mAsynchronous) {
msg.setAsynchronous(true);
}
return queue.enqueueMessage(msg, uptimeMillis);
}

Here msg.target is set to this. The Handler's part is now done; each message carries its uptimeMillis, the absolute time (boot uptime plus the delay) at which it should be delivered.

MessageQueue#enqueueMessage

boolean enqueueMessage(Message msg, long when) {
if (msg.target == null) {
throw new IllegalArgumentException("Message must have a target.");
}

synchronized (this) {
if (msg.isInUse()) {
throw new IllegalStateException(msg + " This message is already in use.");
}

if (mQuitting) {
IllegalStateException e = new IllegalStateException(
msg.target + " sending message to a Handler on a dead thread");
Log.w(TAG, e.getMessage(), e);
msg.recycle();
return false;
}

msg.markInUse();
msg.when = when;
Message p = mMessages;
boolean needWake;
if (p == null || when == 0 || when < p.when) {
// New head, wake up the event queue if blocked.
msg.next = p;
mMessages = msg;
needWake = mBlocked;
} else {
// Inserted within the middle of the queue. Usually we don't have to wake
// up the event queue unless there is a barrier at the head of the queue
// and the message is the earliest asynchronous message in the queue.
needWake = mBlocked && p.target == null && msg.isAsynchronous();
Message prev;
for (;;) {
prev = p;
p = p.next;
if (p == null || when < p.when) {
break;
}
if (needWake && p.isAsynchronous()) {
needWake = false;
}
}
msg.next = p; // invariant: p == prev.next
prev.next = msg;
}

// We can assume mPtr != 0 because mQuitting is false.
if (needWake) {
nativeWake(mPtr);
}
}
return true;
}

Not every Message is actually required to have a non-null target: a Handler sync barrier is precisely a msg whose target is null, used so that asynchronous messages are processed first. (A barrier is enqueued through MessageQueue#postSyncBarrier, which bypasses the check above.)

One important use of the sync barrier is handling the VSYNC signal that drives UI rendering: to keep the UI smooth, when a refresh signal arrives other work is set aside and the frame-rendering task is executed first.

This method keeps messages in a linked list ordered by uptimeMillis, smallest (earliest) first. mMessages is the head; if it is null, or the new message is due earlier, msg becomes the new head. Otherwise the for loop finds msg's position by walking the list while msg.when >= p.when, and the two lines after the loop splice msg into the list.

Handler#dispatchMessage

dispatchMessage first checks msg.callback: if it is set, handleCallback(msg) simply runs message.callback.run().

Otherwise, if the Handler was constructed with a Callback, mCallback.handleMessage(msg) gets a chance to consume the message.

Finally handleMessage(msg) is called. It is an empty method by default, so to handle messages you override it in a Handler subclass, as sketched below.

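A minimal sketch of the sendMessage path: subclass Handler, override handleMessage, and the Message sent below is delivered back to it on the Looper's thread. MSG_UPDATE is just an illustrative constant.

import android.os.Handler;
import android.os.Looper;
import android.os.Message;

class UpdateHandler extends Handler {
    static final int MSG_UPDATE = 1;   // illustrative message code

    UpdateHandler(Looper looper) {
        super(looper);
    }

    @Override
    public void handleMessage(Message msg) {
        if (msg.what == MSG_UPDATE) {
            // dispatchMessage falls through to here when msg.callback and mCallback are null
        }
    }
}

// Sending side (any thread):
// UpdateHandler handler = new UpdateHandler(Looper.getMainLooper());
// handler.sendMessage(handler.obtainMessage(UpdateHandler.MSG_UPDATE));
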
Looper

Source comments

Class used to run a message loop for a thread. Threads by default do not have a message loop associated with them; to create one, call prepare() in the thread that is to run the loop, and then loop() to have it process messages until the loop is stopped.

In short, a Looper runs the message loop for a thread. Threads do not have a message loop associated with them by default; call prepare() in the thread that is to run the loop, and then loop() to process messages until the loop is stopped.

Most interaction with a message loop is through the Handler class.

Most interaction with a message loop happens through the Handler class.

This is a typical example of the implementation of a Looper thread, using the separation of prepare() and loop() to create an initial Handler to communicate with the Looper.

This is a typical example of implementing a Looper thread, using the separation of prepare() and loop() to create an initial Handler that communicates with the Looper.

class LooperThread extends Thread {
public Handler mHandler;

public void run() {
Looper.prepare();

mHandler = new Handler(Looper.myLooper()) {
public void handleMessage(Message msg) {
// process incoming messages here
}
};

Looper.loop();
}
}

Constructor

The Looper constructor creates the MessageQueue.

Looper#prepare

Calls the constructor to create a new Looper and stores it in a ThreadLocal.

Looper#loop

First fetches the current thread's Looper from the ThreadLocal and obtains its MessageQueue. Then, in an infinite loop, it tries to get the next Message; when one is available it calls msg.target.dispatchMessage(msg), so the handling code runs on the thread of the Handler the message belongs to.

Message

Source comments

Defines a message containing a description and arbitrary data object that can be sent to a Handler. This object contains two extra int fields and an extra object field that allow you to not do allocations in many cases.

In short, a Message carries a description and an arbitrary data object and can be sent to a Handler; its two extra int fields and extra object field let you avoid allocations in many cases.

While the constructor of Message is public, the best way to get one of these is to call Message.obtain() or one of the Handler.obtainMessage() methods, which will pull them from a pool of recycled objects.

Although Message's constructor is public, the best way to get one is to call Message.obtain() or one of the Handler.obtainMessage() methods, which pull instances from a pool of recycled objects.

Fields

private static final int MAX_POOL_SIZE = 50; the Message recycling pool holds at most 50 messages.

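For instance, a hedged sketch of obtaining a pooled Message and sending it to its target (MSG_DOWNLOAD_DONE and the handler parameter are illustrative):

import android.os.Handler;
import android.os.Message;

class MessageObtainExample {
    static final int MSG_DOWNLOAD_DONE = 1;   // illustrative what code

    static void notifyDone(Handler handler, int bytes) {
        // Taken from the recycled pool when possible instead of allocating a new object.
        Message msg = Message.obtain(handler, MSG_DOWNLOAD_DONE, bytes, 0);
        msg.sendToTarget();   // equivalent to handler.sendMessage(msg)
    }
}
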
MessageQueue

Source comments

Low-level class holding the list of messages to be dispatched by a Looper. Messages are not added directly to a MessageQueue, but rather through Handler objects associated with the Looper.

In short, MessageQueue is a low-level class holding the list of messages to be dispatched by a Looper; messages are not added to it directly but through Handler objects associated with the Looper.

You can retrieve the MessageQueue for the current thread with Looper.myQueue().

Looper.myQueue() returns the current thread's MessageQueue.

MessageQueue#next

The part related to delays:

@UnsupportedAppUsage
Message next() {
// Return here if the message loop has already quit and been disposed.
// This can happen if the application tries to restart a looper after quit
// which is not supported.
final long ptr = mPtr;
if (ptr == 0) {
return null;
}

int pendingIdleHandlerCount = -1; // -1 only during first iteration
int nextPollTimeoutMillis = 0;
for (;;) {
if (nextPollTimeoutMillis != 0) {
Binder.flushPendingCommands();
}

nativePollOnce(ptr, nextPollTimeoutMillis);

synchronized (this) {
// Try to retrieve the next message. Return if found.
final long now = SystemClock.uptimeMillis();
Message prevMsg = null;
Message msg = mMessages;
if (msg != null && msg.target == null) {
// Stalled by a barrier. Find the next asynchronous message in the queue.
do {
prevMsg = msg;
msg = msg.next;
} while (msg != null && !msg.isAsynchronous());
}
if (msg != null) {
if (now < msg.when) {
// Next message is not ready. Set a timeout to wake up when it is ready.
nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
} else {
...

The key variable here is nextPollTimeoutMillis. Once it is computed, the native method nativePollOnce is called and the thread sleeps until the next message is due. If another message is enqueued during that time, the thread is woken up, the message is inserted, the timeout is recomputed, and the thread goes back to sleep.

ThreadLocal

Source comments

This class provides thread-local variables. These variables differ from their normal counterparts in that each thread that accesses one (via its get or set method) has its own, independently initialized copy of the variable. ThreadLocal instances are typically private static fields in classes that wish to associate state with a thread (e.g., a user ID or Transaction ID).
For example, the class below generates unique identifiers local to each thread. A thread’s id is assigned the first time it invokes ThreadId.get() and remains unchanged on subsequent calls.

In other words, each thread that accesses a thread-local variable (via its get or set method) has its own independently initialized copy of the variable. ThreadLocal instances are typically private static fields in classes that want to associate state with a thread (e.g. a user ID or transaction ID).
For example, the class below generates per-thread unique identifiers: a thread's id is assigned the first time it invokes ThreadId.get() and remains unchanged on subsequent calls.

import java.util.concurrent.atomic.AtomicInteger;

public class ThreadId {
// Atomic integer containing the next thread ID to be assigned
private static final AtomicInteger nextId = new AtomicInteger(0);

// Thread local variable containing each thread's ID
private static final ThreadLocal<Integer> threadId =
new ThreadLocal<Integer>() {
@Override protected Integer initialValue() {
return nextId.getAndIncrement();
}
};

// Returns the current thread's unique ID, assigning it if necessary
public static int get() {
return threadId.get();
}
}

Each thread holds an implicit reference to its copy of a thread-local variable as long as the thread is alive and the ThreadLocal instance is accessible; after a thread goes away, all of its copies of thread-local instances are subject to garbage collection (unless other references to these copies exist).

Each thread holds an implicit reference to its copy of a thread-local variable as long as the thread is alive and the ThreadLocal instance is accessible; after the thread goes away, all of its thread-local copies become subject to garbage collection (unless other references to these copies exist).

ThreadLocalMap

ThreadLocalMap is a custom hash map built specifically for maintaining thread-local values. Its keys are ThreadLocal<?> instances and its values are Object (the generic type parameter determines the actual type in use).

ThreadLocalMap#Entry

An inner class of ThreadLocalMap representing one key-value pair.

static class Entry extends WeakReference<ThreadLocal<?>> {
/** The value associated with this ThreadLocal. */
Object value;

Entry(ThreadLocal<?> k, Object v) {
super(k);
value = v;
}
}

ThreadLocal#get

Using the current thread, get() obtains the ThreadLocalMap held by the Thread object. With this ThreadLocal instance as the key, the ThreadLocal's hash determines the index i in the table, yielding the Entry; the value is then read from the Entry and cast to the generic type.

public T get() {
Thread t = Thread.currentThread();
ThreadLocalMap map = getMap(t);
if (map != null) {
ThreadLocalMap.Entry e = map.getEntry(this);
if (e != null) {
@SuppressWarnings("unchecked")
T result = (T)e.value;
return result;
}
}
return setInitialValue();
}

e.get(): e is a WeakReference, and get() returns the referenced ThreadLocal key.

ThreadLocalMap#getEntry

private Entry getEntry(ThreadLocal<?> key) {
int i = key.threadLocalHashCode & (table.length - 1);
Entry e = table[i];
if (e != null && e.get() == key)
return e;
else
return getEntryAfterMiss(key, i, e);
}

ThreadLocal#set

Using the current thread, set() obtains the ThreadLocalMap held by the Thread object and calls map.set with the ThreadLocal as key and value as value, i.e. an ordinary hash-map set (compute the key's hash, then update the value or create a new Entry).

public void set(T value) {
Thread t = Thread.currentThread();
ThreadLocalMap map = getMap(t);
if (map != null)
map.set(this, value);
else
createMap(t, value);
}

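A tiny runnable illustration of the get/set pair above: two threads store different values in the same ThreadLocal and each reads back only its own copy (remove() is called afterwards, which is good hygiene for pooled threads).

public class ThreadLocalDemo {
    private static final ThreadLocal<String> NAME = new ThreadLocal<>();

    public static void main(String[] args) {
        Runnable task = () -> {
            NAME.set(Thread.currentThread().getName());  // stored in this thread's ThreadLocalMap
            System.out.println(Thread.currentThread().getName() + " sees " + NAME.get());
            NAME.remove();                                // drop this thread's copy
        };
        new Thread(task, "worker-1").start();
        new Thread(task, "worker-2").start();
    }
}
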
Overview of the message mechanism

Looper#prepare creates the Looper for the current thread. The Looper is used to construct a Handler object that implements how messages are handled. To send a message, obtain a Message object, then send it through the Handler, which adds it to the MessageQueue. While looping, the Looper notices that a message has arrived in the MessageQueue, takes it out and calls msg.target.dispatchMessage(msg), so the code that responds to the msg runs on the thread of the Handler the message belongs to.

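The whole pipeline in one place, sketched with HandlerThread, which calls Looper.prepare() and Looper.loop() on its own thread for you:

import android.os.Handler;
import android.os.HandlerThread;
import android.os.Message;

class MessagePipelineExample {
    void run() {
        HandlerThread worker = new HandlerThread("worker");   // prepares and loops its own Looper
        worker.start();

        Handler handler = new Handler(worker.getLooper()) {
            @Override
            public void handleMessage(Message msg) {
                // runs on the "worker" thread, delivered by Looper#loop via dispatchMessage
            }
        };

        handler.sendMessage(handler.obtainMessage(1));   // enqueued into worker's MessageQueue
        handler.post(() -> worker.quitSafely());         // stop the loop when done
    }
}
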
Delayed Messages

public final boolean sendEmptyMessageAtTime(int what, long uptimeMillis) {
Message msg = Message.obtain();
msg.what = what;
return sendMessageAtTime(msg, uptimeMillis);
}

Some questions

Why doesn't the main thread's infinite Looper loop freeze the app?

A thread is essentially a piece of executable code; once that code finishes, the thread's lifecycle ends and the thread exits. The main thread obviously must not stop after running for a while, so how is it kept alive? The simple answer is to keep its code running forever: an infinite loop guarantees it never exits. Binder threads use the same approach, looping to read from and write to the Binder driver. It is not a naive busy loop, though: when there are no messages the thread sleeps. And how does other work get done despite the loop? By creating new threads. What actually freezes the main thread is callbacks such as onCreate, onStart and onResume taking too long, which causes dropped frames or even ANR; Looper.loop() itself does not hang the app.

In ActivityThread#main, the Binder channel is set up (new threads are created) before loop() is entered.

public static void main(String[] args) {
Trace.traceBegin(Trace.TRACE_TAG_ACTIVITY_MANAGER, "ActivityThreadMain");

// Install selective syscall interception
AndroidOs.install();

// CloseGuard defaults to true and can be quite spammy. We
// disable it here, but selectively enable it later (via
// StrictMode) on debug builds, but using DropBox, not logs.
CloseGuard.setEnabled(false);

Environment.initForCurrentUser();

// Make sure TrustedCertificateStore looks in the right place for CA certificates
final File configDir = Environment.getUserConfigDirectory(UserHandle.myUserId());
TrustedCertificateStore.setDefaultUserDirectory(configDir);

// Call per-process mainline module initialization.
initializeMainlineModules();

Process.setArgV0("<pre-initialized>");

Looper.prepareMainLooper();

// Find the value for {@link #PROC_START_SEQ_IDENT} if provided on the command line.
// It will be in the format "seq=114"
long startSeq = 0;
if (args != null) {
for (int i = args.length - 1; i >= 0; --i) {
if (args[i] != null && args[i].startsWith(PROC_START_SEQ_IDENT)) {
startSeq = Long.parseLong(
args[i].substring(PROC_START_SEQ_IDENT.length()));
}
}
}
ActivityThread thread = new ActivityThread();
// set up the Binder channel (creates new threads)
thread.attach(false, startSeq);

if (sMainThreadHandler == null) {
sMainThreadHandler = thread.getHandler();
}

if (false) {
Looper.myLooper().setMessageLogging(new
LogPrinter(Log.DEBUG, "ActivityThread"));
}

// End of event ActivityThreadMain.
Trace.traceEnd(Trace.TRACE_TAG_ACTIVITY_MANAGER);
Looper.loop();

throw new RuntimeException("Main thread loop unexpectedly exited");
}

Doesn't an infinite loop consume a lot of CPU?

Does the main thread's endless loop burn CPU? Not really; this is where the Linux pipe/epoll mechanism comes in. When the main thread's MessageQueue has no messages, it blocks in the nativePollOnce() call inside queue.next() (called from loop()); the main thread releases the CPU and goes to sleep until the next message arrives or some event occurs, at which point writing to the pipe's write end wakes it up. epoll is an I/O multiplexing mechanism that can monitor multiple descriptors at once and notify the program as soon as one becomes ready for reading or writing; it is still synchronous I/O, i.e. the reads and writes themselves block. So the main thread is asleep most of the time and does not consume much CPU.

An analogy for Linux I/O multiplexing: when fishing, to catch as many fish as possible in the shortest time we set up several rods at once; whichever rod gets a bite is the one we reel in. Here, all the messages are handed to this mechanism, and whichever one's time is up gets executed.

epoll vs. select: epoll trades space for time and works like an event-driven model; when an event is ready, epoll is notified, so retrieving it is O(1). select only knows that some event happened and has to poll through the descriptors in O(n) to find it.

Besides the child threads we create ourselves, what other threads does Android create?

We can take the top-level thread group and print it recursively.

Those familiar with ThreadGroup will know it has two static members, systemThreadGroup and mainThreadGroup; mainThreadGroup is actually a child group of systemThreadGroup, so we only need to obtain systemThreadGroup via reflection and print recursively, as in the following code:

class MainActivity : AppCompatActivity() {

override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
fun printThreads(threadGroup: ThreadGroup) {
"group name: ${threadGroup.name}".logI()
// The plan was to grab the child thread instances directly via reflection, but the threads field is blocked, oddly, since the source has no @hide marker on it
// threadGroup::class.get<Array<Thread?>?>(threadGroup, "threads")?.filterNotNull()?.forEach { "thread name: ${it.name}".logI() }
arrayOfNulls<Thread?>(threadGroup.activeCount()).apply { threadGroup.enumerate(this, false) }
.filterNotNull().forEach { "thread name: ${it.name}".logI() }
threadGroup::class.get<Array<ThreadGroup?>?>(threadGroup, "groups")?.filterNotNull()?.forEach { printThreads(it) }
}
printThreads(ThreadGroup::class.get(null, "systemThreadGroup")!!)
    }
}

Log output:

I/(MainActivity.kt:34) invoke: group name: system
I/(MainActivity.kt:36) invoke: thread name: Signal Catcher
I/(MainActivity.kt:36) invoke: thread name: HeapTaskDaemon
I/(MainActivity.kt:36) invoke: thread name: ReferenceQueueDaemon
I/(MainActivity.kt:36) invoke: thread name: FinalizerDaemon
I/(MainActivity.kt:36) invoke: thread name: FinalizerWatchdogDaemon
I/(MainActivity.kt:36) invoke: thread name: Profile Saver

I/(MainActivity.kt:34) invoke: group name: main
I/(MainActivity.kt:36) invoke: thread name: main
I/(MainActivity.kt:36) invoke: thread name: Jit thread pool worker thread 0
I/(MainActivity.kt:36) invoke: thread name: Binder:26573_1
I/(MainActivity.kt:36) invoke: thread name: Binder:26573_2
I/(MainActivity.kt:36) invoke: thread name: Binder:26573_3
I/(MainActivity.kt:36) invoke: thread name: Binder:26573_4
I/(MainActivity.kt:36) invoke: thread name: RenderThread
I/(MainActivity.kt:36) invoke: thread name: magnifier pixel copy result handler
I/(MainActivity.kt:36) invoke: thread name: queued-work-looper
I/(MainActivity.kt:36) invoke: thread name: DefaultDispatcher-worker-1
I/(MainActivity.kt:36) invoke: thread name: DefaultDispatcher-worker-2
I/(MainActivity.kt:36) invoke: thread name: DefaultDispatcher-worker-3

As you can see, the process currently has two thread groups: system and main.

Signal Catcher looks familiar, but I can't find it in the sources; a blind spot for me, I give up.

Next there are four daemon threads; pick any one and do a global search:


They all live in a class called Daemons. I found an article about it: https://www.freesion.com/article/2406625468/

It explains what these four threads do:

  1. HeapTaskDaemon: releases heap memory;
  2. ReferenceQueueDaemon: when soft, weak or phantom referenced objects are collected they are added to the corresponding ReferenceQueue, and this thread performs that work;
  3. FinalizerDaemon: invokes the finalize method of objects that are about to be reclaimed;
  4. FinalizerWatchdogDaemon: watches FinalizerDaemon; if running an object's finalize method takes longer than 100_0000_0000 nanoseconds (i.e. 10 seconds), the process is forcibly killed;

The last one, Profile Saver, I'm not sure exactly what it is for.

There are quite a few threads in the main thread group:

  1. main: obviously the main thread;
  2. Jit thread pool worker thread 0: a thread pool created somewhere I haven't located;
  3. Binder:26573_1, Binder:26573_2, Binder:26573_3, Binder:26573_4: Binder IPC threads;
  4. RenderThread: the thread used to render BlockingGLTextureView synchronously;
  5. magnifier pixel copy result handler: no idea why this exists;
  6. queued-work-looper: a HandlerThread (it carries its own Looper);
  7. DefaultDispatcher-worker-1/2/3: my test demo uses coroutines, and these belong to the coroutines library's thread pool;

How are delayed messages handled and executed in order?

This is handled in two steps. First, when a Message is inserted into the MessageQueue it is sorted by its enqueue time plus delay, so messages that were enqueued earlier or have shorter delays sit nearer the head of the queue. Second, when MessageQueue fetches the message to execute, it checks whether the message's execution time has arrived; if not, it computes the time difference and uses it with the epoll mechanism to sleep until that deadline.

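A small sketch of the ordering described above: two delayed posts land in the queue sorted by their absolute due time, so the 1-second one runs before the 2-second one regardless of posting order.

import android.os.Handler;
import android.os.Looper;

class DelayedExample {
    private final Handler handler = new Handler(Looper.getMainLooper());

    void schedule() {
        handler.postDelayed(() -> android.util.Log.d("Delayed", "runs second"), 2000); // due at uptime + 2000 ms
        handler.postDelayed(() -> android.util.Log.d("Delayed", "runs first"), 1000);  // inserted ahead of it
    }
}
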
References

Source code

Android消息机制-Handler · Leo’s Studio

Android中为什么主线程不会因为Looper.loop()里的死循环卡死? - 知乎

multithreading - Android default threads and their use - Stack Overflow

每日一问 | 启动了Activity 的 app 至少有几个线程?-玩Android - wanandroid.com

关于 Handler 的一切

Handler是如何实现延时消息的? - 简书

Android 11 API 30

Source comments

An activity is a single, focused thing that the user can do. Almost all activities interact with the user, so the Activity class takes care of creating a window for you in which you can place your UI with setContentView(View). While activities are often presented to the user as full-screen windows, they can also be used in other ways: as floating windows (via a theme with R.attr.windowIsFloating set), Multi-Window mode or embedded into other windows. There are two methods almost all subclasses of Activity will implement:

In short, an Activity is a single, focused thing the user can do. Almost all activities interact with the user, so the Activity class creates a window for you in which you place your UI via setContentView(). Although activities are usually presented full-screen, they can also be used as floating windows (via a theme with R.attr.windowIsFloating set), in multi-window mode, or embedded in other windows. Almost every Activity subclass implements two methods:

  • onCreate(Bundle) is where you initialize your activity. Most importantly, here you will usually call setContentView(int) with a layout resource defining your UI, and using findViewById(int) to retrieve the widgets in that UI that you need to interact with programmatically.

onCreate(Bundle) is where you initialize the activity, most importantly calling setContentView with a layout resource that defines your UI, and using findViewById to retrieve the widgets you need to interact with programmatically.

  • onPause() is where you deal with the user pausing active interaction with the activity. Any changes made by the user should at this point be committed (usually to the ContentProvider holding the data). In this state the activity is still visible on screen.

onPause() is where you handle the user pausing active interaction with the activity; any changes the user made should be committed at this point (usually to the ContentProvider holding the data). The activity is still visible on screen in this state.

To be of use with Context.startActivity(), all activity classes must have a corresponding <activity> declaration in their package’s AndroidManifest.xml.

To be usable with Context.startActivity(), every Activity class must have a corresponding <activity> declaration in its package's AndroidManifest.xml.

Topics covered here:

  1. Fragments
  2. Activity Lifecycle
  3. Configuration Changes
  4. Starting Activities and Getting Results
  5. Saving Persistent State
  6. Permissions
  7. Process Lifecycle

Developer Guides

The Activity class is an important part of an application’s overall lifecycle, and the way activities are launched and put together is a fundamental part of the platform’s application model. For a detailed perspective on the structure of an Android application and how activities behave, please read the Application Fundamentals and Tasks and Back Stack developer guides.

You can also find a detailed discussion about how to create activities in the Activities developer guide.

Fragments

The FragmentActivity subclass can make use of the Fragment class to better modularize their code, build more sophisticated user interfaces for larger screens, and help scale their application between small and large screens.

That is, a FragmentActivity subclass can use the Fragment class to modularize its code better, build more sophisticated user interfaces for larger screens, and help the application scale between small and large screens.

For more information about using fragments, read the Fragments developer guide.

Activity Lifecycle

Activities in the system are managed as activity stacks. When a new activity is started, it is usually placed on the top of the current stack and becomes the running activity – the previous activity always remains below it in the stack, and will not come to the foreground again until the new activity exits. There can be one or multiple activity stacks visible on screen.

One or more activity stacks can be visible on screen at the same time.

An activity has essentially four states:

  • If an activity is in the foreground of the screen (at the highest position of the topmost stack), it is active or running. This is usually the activity that the user is currently interacting with.
  • If an activity has lost focus but is still presented to the user, it is visible. It is possible if a new non-full-sized or transparent activity has focus on top of your activity, another activity has higher position in multi-window mode, or the activity itself is not focusable in current windowing mode. Such activity is completely alive (it maintains all state and member information and remains attached to the window manager).
  • If an activity is completely obscured by another activity, it is stopped or hidden. It still retains all state and member information, however, it is no longer visible to the user so its window is hidden and it will often be killed by the system when memory is needed elsewhere.
  • The system can drop the activity from memory by either asking it to finish, or simply killing its process, making it destroyed. When it is displayed again to the user, it must be completely restarted and restored to its previous state.

The following diagram shows the important state paths of an Activity. The square rectangles represent callback methods you can implement to perform operations when the Activity moves between states. The colored ovals are major states the Activity can be in.

State diagram for an Android Activity Lifecycle.

There are three key loops you may be interested in monitoring within your activity:

  • The entire lifetime of an activity happens between the first call to onCreate(Bundle) through to a single final call to onDestroy(). An activity will do all setup of “global” state in onCreate(), and release all remaining resources in onDestroy(). For example, if it has a thread running in the background to download data from the network, it may create that thread in onCreate() and then stop the thread in onDestroy().
  • The visible lifetime of an activity happens between a call to onStart() until a corresponding call to onStop(). During this time the user can see the activity on-screen, though it may not be in the foreground and interacting with the user. Between these two methods you can maintain resources that are needed to show the activity to the user. For example, you can register a BroadcastReceiver in onStart() to monitor for changes that impact your UI, and unregister it in onStop() when the user no longer sees what you are displaying. The onStart() and onStop() methods can be called multiple times, as the activity becomes visible and hidden to the user.
  • The foreground lifetime of an activity happens between a call to onResume() until a corresponding call to onPause(). During this time the activity is in visible, active and interacting with the user. An activity can frequently go between the resumed and paused states – for example when the device goes to sleep, when an activity result is delivered, when a new intent is delivered – so the code in these methods should be fairly lightweight.

The entire lifecycle of an activity is defined by the following Activity methods. All of these are hooks that you can override to do appropriate work when the activity changes state. All activities will implement onCreate(Bundle) to do their initial setup; many will also implement onPause() to commit changes to data and prepare to pause interacting with the user, and onStop() to handle no longer being visible on screen. You should always call up to your superclass when implementing these methods.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
public class Activity extends ApplicationContext {
protected void onCreate(Bundle savedInstanceState);

protected void onStart();

protected void onRestart();

protected void onResume();

protected void onPause();

protected void onStop();

protected void onDestroy();
}

In general the movement through an activity’s lifecycle looks like this:

For each method: what it does, whether the hosting process is killable after it returns, and which callback comes next.

  • onCreate(): Called when the activity is first created. This is where you should do all of your normal static set up: create views, bind data to lists, etc. This method also provides you with a Bundle containing the activity's previously frozen state, if there was one. Always followed by onStart(). Killable: No. Next: onStart().
  • onRestart(): Called after your activity has been stopped, prior to it being started again. Always followed by onStart(). Killable: No. Next: onStart().
  • onStart(): Called when the activity is becoming visible to the user. Followed by onResume() if the activity comes to the foreground, or onStop() if it becomes hidden. Killable: No. Next: onResume() or onStop().
  • onResume(): Called when the activity will start interacting with the user. At this point your activity is at the top of its activity stack, with user input going to it. Always followed by onPause(). Killable: No. Next: onPause().
  • onPause(): Called when the activity loses foreground state, is no longer focusable or before transition to stopped/hidden or destroyed state. The activity is still visible to the user, so it's recommended to keep it visually active and continue updating the UI. Implementations of this method must be very quick because the next activity will not be resumed until this method returns. Followed by either onResume() if the activity returns back to the front, or onStop() if it becomes invisible to the user. Killable: Pre-Build.VERSION_CODES.HONEYCOMB. Next: onResume() or onStop().
  • onStop(): Called when the activity is no longer visible to the user. This may happen either because a new activity is being started on top, an existing one is being brought in front of this one, or this one is being destroyed. This is typically used to stop animations, refreshing the UI, etc. Followed by either onRestart() if this activity is coming back to interact with the user, or onDestroy() if this activity is going away. Killable: Yes. Next: onRestart() or onDestroy().
  • onDestroy(): The final call you receive before your activity is destroyed. This can happen either because the activity is finishing (someone called Activity#finish on it), or because the system is temporarily destroying this instance of the activity to save space. You can distinguish between these two scenarios with the isFinishing() method. Killable: Yes. Next: nothing.

Note the “Killable” column in the above table – for those methods that are marked as being killable, after that method returns the process hosting the activity may be killed by the system at any time without another line of its code being executed. Because of this, you should use the onPause() method to write any persistent data (such as user edits) to storage. In addition, the method onSaveInstanceState(android.os.Bundle) is called before placing the activity in such a background state, allowing you to save away any dynamic instance state in your activity into the given Bundle, to be later received in onCreate(Bundle) if the activity needs to be re-created. See the Process Lifecycle section for more information on how the lifecycle of a process is tied to the activities it is hosting. Note that it is important to save persistent data in onPause() instead of onSaveInstanceState(Bundle) because the latter is not part of the lifecycle callbacks, so will not be called in every situation as described in its documentation.

Be aware that these semantics will change slightly between applications targeting platforms starting with Build.VERSION_CODES.HONEYCOMB vs. those targeting prior platforms. Starting with Honeycomb, an application is not in the killable state until its onStop() has returned. This impacts when onSaveInstanceState(android.os.Bundle) may be called (it may be safely called after onPause()) and allows an application to safely wait until onStop() to save persistent state.

For applications targeting platforms starting with Build.VERSION_CODES.P onSaveInstanceState(android.os.Bundle) will always be called after onStop(), so an application may safely perform fragment transactions in onStop() and will be able to save persistent state later.

For those methods that are not marked as being killable, the activity’s process will not be killed by the system starting from the time the method is called and continuing after it returns. Thus an activity is in the killable state, for example, between after onStop() to the start of onResume(). Keep in mind that under extreme memory pressure the system can kill the application process at any time.

Configuration Changes

If the configuration of the device (as defined by the Resources.Configuration class) changes, then anything displaying a user interface will need to update to match that configuration. Because Activity is the primary mechanism for interacting with the user, it includes special support for handling configuration changes.

If the device configuration (as defined by the Resources.Configuration class) changes, anything displaying a user interface needs to update to match it. Because Activity is the primary mechanism for interacting with the user, it includes special support for handling configuration changes.

Unless you specify otherwise, a configuration change (such as a change in screen orientation, language, input devices, etc) will cause your current activity to be destroyed, going through the normal activity lifecycle process of onPause(), onStop(), and onDestroy() as appropriate. If the activity had been in the foreground or visible to the user, once onDestroy() is called in that instance then a new instance of the activity will be created, with whatever savedInstanceState the previous instance had generated from onSaveInstanceState(Bundle).

This is done because any application resource, including layout files, can change based on any configuration value. Thus the only safe way to handle a configuration change is to re-retrieve all resources, including layouts, drawables, and strings. Because activities must already know how to save their state and re-create themselves from that state, this is a convenient way to have an activity restart itself with a new configuration.

In some special cases, you may want to bypass restarting of your activity based on one or more types of configuration changes. This is done with the android:configChanges attribute in its manifest. For any types of configuration changes you say that you handle there, you will receive a call to your current activity’s onConfigurationChanged(Configuration) method instead of being restarted. If a configuration change involves any that you do not handle, however, the activity will still be restarted and onConfigurationChanged(Configuration) will not be called.

In some special cases you may want to bypass restarting the activity for one or more kinds of configuration changes, using the android:configChanges attribute in the manifest. For any configuration change types you declare there, your current activity's onConfigurationChanged(Configuration) is called instead of restarting it; if the change involves any type you do not handle, the activity is still restarted and onConfigurationChanged(Configuration) is not called.

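As a sketch of this, the manifest entry is shown as a comment and HandledConfigActivity is a made-up name; declaring the handled types and overriding the callback looks roughly like this.

import android.app.Activity;
import android.content.res.Configuration;

// AndroidManifest.xml (assumed):
// <activity android:name=".HandledConfigActivity"
//           android:configChanges="orientation|screenSize|keyboardHidden" />
public class HandledConfigActivity extends Activity {
    @Override
    public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        // Called instead of a restart for the declared change types;
        // update any resource-dependent UI manually here.
    }
}
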
Starting Activities and Getting Results

The startActivity(Intent) method is used to start a new activity, which will be placed at the top of the activity stack. It takes a single argument, an Intent, which describes the activity to be executed.

startActivity(Intent) starts a new activity, which is placed on top of the activity stack. Its single Intent argument describes the activity to be executed.

Sometimes you want to get a result back from an activity when it ends. For example, you may start an activity that lets the user pick a person in a list of contacts; when it ends, it returns the person that was selected. To do this, you call the startActivityForResult(Intent, int) version with a second integer parameter identifying the call. The result will come back through your onActivityResult(int, int, Intent) method.

Sometimes you want a result back from an activity when it ends. To do that, call startActivityForResult(Intent, int), where the second int parameter identifies the call; the result comes back through your onActivityResult(int, int, Intent) method.

When an activity exits, it can call setResult(int) to return data back to its parent. It must always supply a result code, which can be the standard results RESULT_CANCELED, RESULT_OK, or any custom values starting at RESULT_FIRST_USER. In addition, it can optionally return back an Intent containing any additional data it wants. All of this information appears back on the parent’s Activity.onActivityResult(), along with the integer identifier it originally supplied.

When an activity exits it can call setResult(int) to return data to its parent. It must always supply a result code, which can be the standard RESULT_CANCELED or RESULT_OK, or any custom value starting at RESULT_FIRST_USER; in addition it can optionally return an Intent containing any extra data it wants. All of this information appears in the parent's Activity.onActivityResult(), together with the integer identifier the parent originally supplied.

If a child activity fails for any reason (such as crashing), the parent activity will receive a result with the code RESULT_CANCELED.

If a child activity fails for any reason (such as crashing), the parent activity receives a result with the code RESULT_CANCELED.

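The child side of this contract, as a hedged sketch (PickActivity and EXTRA_PICKED are made-up names), would be:

import android.app.Activity;
import android.content.Intent;

public class PickActivity extends Activity {
    static final String EXTRA_PICKED = "picked";   // illustrative extra key

    void deliverResult(String pickedItem) {
        Intent data = new Intent();
        data.putExtra(EXTRA_PICKED, pickedItem);
        setResult(RESULT_OK, data);   // received in the parent's onActivityResult()
        finish();
    }
}
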
public class MyActivity extends Activity {
...

static final int PICK_CONTACT_REQUEST = 0;

public boolean onKeyDown(int keyCode, KeyEvent event) {
if (keyCode == KeyEvent.KEYCODE_DPAD_CENTER) {
// When the user center presses, let them pick a contact.
startActivityForResult(
new Intent(Intent.ACTION_PICK,
Uri.parse("content://contacts")),
PICK_CONTACT_REQUEST);
return true;
}
return false;
}

protected void onActivityResult(int requestCode, int resultCode,
Intent data) {
if (requestCode == PICK_CONTACT_REQUEST) {
if (resultCode == RESULT_OK) {
// A contact was picked. Here we will just display it
// to the user.
startActivity(new Intent(Intent.ACTION_VIEW, data));
}
}
}
}

Saving Persistent State

There are generally two kinds of persistent state that an activity will deal with: shared document-like data (typically stored in a SQLite database using a content provider) and internal state such as user preferences.

For content provider data, we suggest that activities use an “edit in place” user model. That is, any edits a user makes are effectively made immediately without requiring an additional confirmation step. Supporting this model is generally a simple matter of following two rules:

  • When creating a new document, the backing database entry or file for it is created immediately. For example, if the user chooses to write a new email, a new entry for that email is created as soon as they start entering data, so that if they go to any other activity after that point this email will now appear in the list of drafts.
  • When an activity’s onPause() method is called, it should commit to the backing content provider or file any changes the user has made. This ensures that those changes will be seen by any other activity that is about to run. You will probably want to commit your data even more aggressively at key times during your activity’s lifecycle: for example before starting a new activity, before finishing your own activity, when the user switches between input fields, etc.

This model is designed to prevent data loss when a user is navigating between activities, and allows the system to safely kill an activity (because system resources are needed somewhere else) at any time after it has been stopped (or paused on platform versions before Build.VERSION_CODES.HONEYCOMB). Note this implies that the user pressing BACK from your activity does not mean “cancel” – it means to leave the activity with its current contents saved away. Canceling edits in an activity must be provided through some other mechanism, such as an explicit “revert” or “undo” option.

See the content package for more information about content providers. These are a key aspect of how different activities invoke and propagate data between themselves.

The Activity class also provides an API for managing internal persistent state associated with an activity. This can be used, for example, to remember the user’s preferred initial display in a calendar (day view or week view) or the user’s default home page in a web browser.

Activity persistent state is managed with the method getPreferences(int), allowing you to retrieve and modify a set of name/value pairs associated with the activity. To use preferences that are shared across multiple application components (activities, receivers, services, providers), you can use the underlying Context.getSharedPreferences() method to retrieve a preferences object stored under a specific name. (Note that it is not possible to share settings data across application packages – for that you will need a content provider.)

Here is an excerpt from a calendar activity that stores the user’s preferred view mode in its persistent settings:

public class CalendarActivity extends Activity {
...

static final int DAY_VIEW_MODE = 0;
static final int WEEK_VIEW_MODE = 1;

private SharedPreferences mPrefs;
private int mCurViewMode;

protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);

mPrefs = getPreferences(MODE_PRIVATE);
mCurViewMode = mPrefs.getInt("view_mode", DAY_VIEW_MODE);
}

protected void onPause() {
super.onPause();

SharedPreferences.Editor ed = mPrefs.edit();
ed.putInt("view_mode", mCurViewMode);
ed.commit();
}
}

Permissions

The ability to start a particular Activity can be enforced when it is declared in its manifest’s <activity> tag. By doing so, other applications will need to declare a corresponding <uses-permission> element in their own manifest to be able to start that activity.

When starting an Activity you can set Intent.FLAG_GRANT_READ_URI_PERMISSION and/or Intent.FLAG_GRANT_WRITE_URI_PERMISSION on the Intent. This will grant the Activity access to the specific URIs in the Intent. Access will remain until the Activity has finished (it will remain across the hosting process being killed and other temporary destruction). As of Build.VERSION_CODES.GINGERBREAD, if the Activity was already created and a new Intent is being delivered to onNewIntent(android.content.Intent), any newly granted URI permissions will be added to the existing ones it holds.

See the Security and Permissions document for more information on permissions and security in general.

Process Lifecycle

The Android system attempts to keep an application process around for as long as possible, but eventually will need to remove old processes when memory runs low. As described in Activity Lifecycle, the decision about which process to remove is intimately tied to the state of the user’s interaction with it. In general, there are four states a process can be in based on the activities running in it, listed here in order of importance. The system will kill less important processes (the last ones) before it resorts to killing more important processes (the first ones).

  1. The foreground activity (the activity at the top of the screen that the user is currently interacting with) is considered the most important. Its process will only be killed as a last resort, if it uses more memory than is available on the device. Generally at this point the device has reached a memory paging state, so this is required in order to keep the user interface responsive.
  2. A visible activity (an activity that is visible to the user but not in the foreground, such as one sitting behind a foreground dialog or next to other activities in multi-window mode) is considered extremely important and will not be killed unless that is required to keep the foreground activity running.
  3. A background activity (an activity that is not visible to the user and has been stopped) is no longer critical, so the system may safely kill its process to reclaim memory for other foreground or visible processes. If its process needs to be killed, when the user navigates back to the activity (making it visible on the screen again), its onCreate(Bundle) method will be called with the savedInstanceState it had previously supplied in onSaveInstanceState(Bundle) so that it can restart itself in the same state as the user last left it.
  4. An empty process is one hosting no activities or other application components (such as Service or BroadcastReceiver classes). These are killed very quickly by the system as memory becomes low. For this reason, any background operation you do outside of an activity must be executed in the context of an activity, BroadcastReceiver, or Service to ensure that the system knows it needs to keep your process around.

Sometimes an Activity may need to do a long-running operation that exists independently of the activity lifecycle itself. An example may be a camera application that allows you to upload a picture to a web site. The upload may take a long time, and the application should allow the user to leave the application while it is executing. To accomplish this, your Activity should start a Service in which the upload takes place. This allows the system to properly prioritize your process (considering it to be more important than other non-visible applications) for the duration of the upload, independent of whether the original activity is paused, stopped, or finished.

startActivity(Intent)

Activity#startActivity(Intent)

Activity#startActivity(Intent, Bundle)

Activity#startActivityForResult(Intent, int)

Activity#startActivityForResult(Intent, int, Bundle)

Instrumentation#execStartActivity(Context, IBinder, IBinder, Activity, Intent, int, Bundle)

这里调用 ActivityTaskManager.getService().startActivity() 接着启动 Activity。getService() 返回了一个实现了 IActivityTaskManager (由 IActivityTaskManager.aidl 定义)的对象,ActivityTaskManagerService 继承了 IActivityTaskManager.Stub。

ActivityTaskManagerService 是管理 Activity 及其容器(task、stack、display 等)的 system service。

ActivityTaskManagerService#startActivity

ActivityTaskManagerService#startActivityAsUser

这里有:

// TODO: Switch to user app stacks here.
return getActivityStartController().obtainStarter(intent, "startActivityAsUser")
.setCaller(caller)
.setCallingPackage(callingPackage)
.setCallingFeatureId(callingFeatureId)
.setResolvedType(resolvedType)
.setResultTo(resultTo)
.setResultWho(resultWho)
.setRequestCode(requestCode)
.setStartFlags(startFlags)
.setProfilerInfo(profilerInfo)
.setActivityOptions(bOptions)
.setUserId(userId)
.execute();

getActivityStartController 得到的是 ActivityStartController。再 obtainStarter 得到的是 ActivityStarter。ActivityStarter 是用于解释如何启动一个 Activity,然后启动 Activity 的。

ActivityStarter#execute

executeRequest(mRequest) 执行 Activity 的启动请求,开启 Activity 的启动旅程,并在这里做一些初步的检查。正常的 Activity 启动流程将从 startActivityUnchecked 到 startActivityInner。

在方法的最后又调用了 startActivityUnchecked。调用这个方法就意味着大部分的初步检查已经完成且调用者已经确认了有必要的权限。这里同时确保了如果启动不成功会移除正在启动的 Activity。

startActivityInner 启动一个 Activity,确定 Activity 是否应该添加到现有 task 的栈顶,或是把一个 intent 传给已有的 Activity,并把 Activity 的 task 放到要求的或有效的 stack/display 上。

方法里重点有一个:

mRootWindowContainer.resumeFocusedStacksTopActivities(
mTargetStack, mStartActivity, mOptions);

RootWindowContainer#resumeFocusedStacksTopActivities 里调用了

ActivityStack#resumeTopActivityUncheckedLocked 确保栈顶的 Activity 是 resumed。

resumeTopActivityInnerLocked 主要负责上一个 Activity 的 pause 和下一个 Activity 的 resume 一系列操作。

ActivityStack#startPausingLocked pause 目前 resume 的 Activity。

mAtmService.getLifecycleManager().scheduleTransaction(prev.app.getThread(),
prev.appToken, PauseActivityItem.obtain(prev.finishing, userLeaving,
prev.configChangeFlags, pauseImmediately));

mAtmService.getLifecycleManager() 返回的是 ClientLifecycleManager 的实例。

ClientLifecycleManager:该类能把多个生命周期请求和/或回调组合起来,作为单个事务执行。

ClientLifecycleManager#scheduleTransaction

void scheduleTransaction(@NonNull IApplicationThread client, @NonNull IBinder activityToken,
@NonNull ActivityLifecycleItem stateRequest) throws RemoteException {
final ClientTransaction clientTransaction = transactionWithState(client, activityToken,
stateRequest);
scheduleTransaction(clientTransaction);
}
void scheduleTransaction(ClientTransaction transaction) throws RemoteException {
final IApplicationThread client = transaction.getClient();
transaction.schedule();
if (!(client instanceof Binder)) {
// If client is not an instance of Binder - it's a remote call and at this point it is
// safe to recycle the object. All objects used for local calls will be recycled after
// the transaction is executed on client in ActivityThread.
transaction.recycle();
}
}

ClientTransaction#schedule

ClientTransaction:保存一系列消息的容器,这些消息可能被发送到客户端,包括一个回调列表和一个最终的生命周期状态。

ActivityLifecycleItem:用以请求 Activity 应该到达的生命周期状态。继承自 ClientTransactionItem,主要的子类有 DestroyActivityItem、PauseActivityItem、StopActivityItem、ResumeActivityItem 等。将每一个生命周期的节点单独封装,虽然在代码层次和阅读上趋于复杂化了,但是更方便于对每一个生命周期进行单独维护而不必影响其他代码。

public void schedule() throws RemoteException {
mClient.scheduleTransaction(this);
}

mClient 是 IApplicationThread 类型。

IApplicationThread:是系统进程持有的 app 进程中 ApplicationThread 的 Binder 代理对象。系统进程通过 ProcessRecord.IApplicationThread 调用 app 进程相关方法。

ApplicationThread:ActivityThread 的内部类。AMS 通过 binder 代理调用到 ApplicationThread 中的方法后,通过主线程(ActivityThread 中的 main 方法)中开启的 handler 消息轮询来通知主线程调用相关方法。主线程的相关生命周期方法的具体实现会委托给 Instrumentation 类实现,在 Instrumentation 类中,会调用具体组件的相关生命周期方法。

ApplicationThread#scheduleTransaction

@Override
public void scheduleTransaction(ClientTransaction transaction) throws RemoteException {
ActivityThread.this.scheduleTransaction(transaction);
}

ActivityThread 的父类 ClientTransactionHandler:

ClientTransactionHandler#scheduleTransaction

/** Prepare and schedule transaction for execution. */
void scheduleTransaction(ClientTransaction transaction) {
transaction.preExecute(this);
sendMessage(ActivityThread.H.EXECUTE_TRANSACTION, transaction);
}

sendMessage 在父类中是抽象方法。

ActivityThread#H#handleMessage

case EXECUTE_TRANSACTION:
final ClientTransaction transaction = (ClientTransaction) msg.obj;
mTransactionExecutor.execute(transaction);
if (isSystem()) {
// Client transactions inside system process are recycled on the client side
// instead of ClientLifecycleManager to avoid being cleared before this
// message is handled.
transaction.recycle();
}
// TODO(lifecycler): Recycle locally scheduled transactions.
break;

mTransactionExecutor 是 TransactionExecutor 的实例。

  • TransactionExecutor:以正确的顺序管理事务执行的类。

TransactionExecutor#execute 中调用 executeCallbacks,executeCallbacks 中调用 TransactionExecutor#cycleToPath,cycleToPath 中又调用 TransactionExecutor#performLifecycleSequence:

/** Transition the client through previously initialized state sequence. */
private void performLifecycleSequence(ActivityClientRecord r, IntArray path,
ClientTransaction transaction) {
final int size = path.size();
for (int i = 0, state; i < size; i++) {
state = path.get(i);
if (DEBUG_RESOLVER) {
Slog.d(TAG, tId(transaction) + "Transitioning activity: "
+ getShortActivityName(r.token, mTransactionHandler)
+ " to state: " + getStateName(state));
}
switch (state) {
case ON_CREATE:
mTransactionHandler.handleLaunchActivity(r, mPendingActions,
null /* customIntent */);
break;
case ON_START:
mTransactionHandler.handleStartActivity(r.token, mPendingActions);
break;
case ON_RESUME:
mTransactionHandler.handleResumeActivity(r.token, false /* finalStateRequest */,
r.isForward, "LIFECYCLER_RESUME_ACTIVITY");
break;
case ON_PAUSE:
mTransactionHandler.handlePauseActivity(r.token, false /* finished */,
false /* userLeaving */, 0 /* configChanges */, mPendingActions,
"LIFECYCLER_PAUSE_ACTIVITY");
break;
case ON_STOP:
mTransactionHandler.handleStopActivity(r.token, 0 /* configChanges */,
mPendingActions, false /* finalStateRequest */,
"LIFECYCLER_STOP_ACTIVITY");
break;
case ON_DESTROY:
mTransactionHandler.handleDestroyActivity(r.token, false /* finishing */,
0 /* configChanges */, false /* getNonConfigInstance */,
"performLifecycleSequence. cycling to:" + path.get(size - 1));
break;
case ON_RESTART:
mTransactionHandler.performRestartActivity(r.token, false /* start */);
break;
default:
throw new IllegalArgumentException("Unexpected lifecycle state: " + state);
}
}
}

mTransactionHandler 是 ClientTransactionHandler 类型的实例。ClientTransactionHandler 中的 handleLaunchActivity 是一个抽象方法,ActivityThread 作为其子类实现了该方法,并在 ActivityThread#handleLaunchActivity 方法中调用了 ActivityThread#performLaunchActivity:

private Activity performLaunchActivity(ActivityClientRecord r, Intent customIntent) {
>>>
Activity activity = null;
try {
java.lang.ClassLoader cl = appContext.getClassLoader();
activity = mInstrumentation.newActivity(
cl, component.getClassName(), r.intent);
StrictMode.incrementExpectedActivityCount(activity.getClass());
r.intent.setExtrasClassLoader(cl);
r.intent.prepareToEnterProcess();
if (r.state != null) {
r.state.setClassLoader(cl);
}
} catch (Exception e) {
>>>
}
>>>
if (r.isPersistable()) {
mInstrumentation.callActivityOnCreate(activity, r.state, r.persistentState);
} else {
mInstrumentation.callActivityOnCreate(activity, r.state);
}
>>>
}

第一段是 Activity 的创建,第二段是 onCreate 方法的执行。

Instrumentation#newActivity

public Activity newActivity(ClassLoader cl, String className,
Intent intent)
throws InstantiationException, IllegalAccessException,
ClassNotFoundException {
String pkg = intent != null && intent.getComponent() != null
? intent.getComponent().getPackageName() : null;
return getFactory(pkg).instantiateActivity(cl, className, intent);
}

AppComponentFactory#instantiateActivity

public @NonNull Activity instantiateActivity(@NonNull ClassLoader cl, @NonNull String className,
@Nullable Intent intent)
throws InstantiationException, IllegalAccessException, ClassNotFoundException {
return (Activity) cl.loadClass(className).newInstance();
}

由此可看出,Activity是通过类加载器去创建的实例。

Instrumentation#callActivityOnCreate

public void callActivityOnCreate(Activity activity, Bundle icicle) {
prePerformCreate(activity);
activity.performCreate(icicle);
postPerformCreate(activity);
}

Activity#performCreate

@UnsupportedAppUsage
final void performCreate(Bundle icicle, PersistableBundle persistentState) {
dispatchActivityPreCreated(icicle);
mCanEnterPictureInPicture = true;
restoreHasCurrentPermissionRequest(icicle);
if (persistentState != null) {
onCreate(icicle, persistentState);
} else {
onCreate(icicle);
}
writeEventLog(LOG_AM_ON_CREATE_CALLED, "performCreate");
mActivityTransitionState.readState(icicle);

mVisibleFromClient = !mWindow.getWindowStyle().getBoolean(
com.android.internal.R.styleable.Window_windowNoDisplay, false);
mFragments.dispatchActivityCreated();
mActivityTransitionState.setEnterActivityOptions(this, getActivityOptions());
dispatchActivityPostCreated(icicle);
}

至此,就能看到我们所熟悉的onCreate方法了。

参考

源码

Android11中Activity的启动流程—从startActivity到onCreate_yu749942362的专栏-CSDN博客

深入研究源码:Activity启动流程分析

OpenCV 版本:4.5.1

源码架构

modules 目录下是主要的功能代码, 在此目录下又根据不同的功能分为几个模块。如:

core 定义了基本的数据结构,包括 Mat 和被其它模块使用的基础函数。

imgproc 图像处理模块,包括线性和非线性滤波、图像的几何变换、色彩空间转换、直方图等等。

video 视频分析模块,包括运动估计、背景差分、对象追踪算法。

objdetect 检测预定义好的类的对象和实例,如面部、眼睛、人物、汽车等。

highgui 一个易于使用的简单的 UI 能力接口。

videoio 一个易于使用的视频获取和解码的接口。

每个模块内部又有大致如下结构,以 imgproc 模块为例:

doc 模块相关文档

include 模块的头文件

misc 一些杂项,一般是其它语言如 Java 、Python、Objective-C 的一些相关文件。

perf 包含性能测试的一些文件。

src 基本是功能实现代码。

test 测试文件。


core 模块

core 定义了基本的数据结构,包括 Mat 和被其它模块使用的基础函数。

在 OpenCV 中一个很重要的类就是 Mat。Mat 类替代了先前 C 接口中用来表示图像的 IplImage 和 CvMat 数据结构,不需要再手动管理内存。Mat 在 OpenCV 中被用来表示一幅图像,同时也可用作普通矩阵。

Mat 类的定义在 modules/core/include/opencv2/core/mat.hpp 头文件中,实现在modules/core/src/matrix.cpp 中。

Mat 类的定义:

class CV_EXPORTS Mat
{
public:
// 一系列函数
...
/*! 包括一些位域
- Mat 的标识
- 数据是否连续
- 深度
- 通道数目
*/
int flags;
//! 矩阵的维度,>= 2
int dims;
//! 矩阵的行数和列数,超过 2 维取值为 -1
int rows, cols;
//! 指向数据的指针
uchar* data;

//! 定位 ROI(Region of interest) 和调整 ROI 的帮助域
const uchar* datastart;
const uchar* dataend;
const uchar* datalimit;

//! 自定义分配器
MatAllocator* allocator;
//! 标准分配器
static MatAllocator* getStdAllocator();
static MatAllocator* getDefaultAllocator();
static void setDefaultAllocator(MatAllocator* allocator);

//! 与 UMat 的交互
UMatData* u;

MatSize size;
MatStep step;
...
};

Mat 的构造函数之一:

Mat::Mat(int _rows, int _cols, int _type)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), data(0), datastart(0), dataend(0),
datalimit(0), allocator(0), u(0), size(&rows), step(0)
{
create(_rows, _cols, _type);
}

创建行数为 _rows,列数为 _cols,类型为 _type 的 Mat。

其函数内部通过 create() 函数来创建对象:

void Mat::create(int d, const int* _sizes, int _type)
{
int i;
// 断言检查
CV_Assert(0 <= d && d <= CV_MAX_DIM && _sizes);
// 通过宏,对 _type 进行运算获得 Mat 类型
_type = CV_MAT_TYPE(_type);

if( data && (d == dims || (d == 1 && dims <= 2)) && _type == type() )
{
if( d == 2 && rows == _sizes[0] && cols == _sizes[1] )
return;
for( i = 0; i < d; i++ )
if( size[i] != _sizes[i] )
break;
if( i == d && (d > 1 || size[1] == 1))
return;
}

int _sizes_backup[CV_MAX_DIM]; // #5991
if (_sizes == (this->size.p))
{
for(i = 0; i < d; i++ )
_sizes_backup[i] = _sizes[i];
_sizes = _sizes_backup;
}

release();
if( d == 0 )
return;
// 计算 flags
flags = (_type & CV_MAT_TYPE_MASK) | MAGIC_VAL;
// 设置 size
setSize(*this, d, _sizes, 0, true);

if( total() > 0 )
{
// 设置分配器
MatAllocator *a = allocator, *a0 = getDefaultAllocator();
#ifdef HAVE_TGPU
if( !a || a == tegra::getAllocator() )
a = tegra::getAllocator(d, _sizes, _type);
#endif
if(!a)
a = a0;
try
{
// 分配空间, data 初始化为 0
u = a->allocate(dims, size, _type, 0, step.p, ACCESS_RW /* ignored */, USAGE_DEFAULT);
CV_Assert(u != 0);
}
catch (...)
{
if (a == a0)
throw;
u = a0->allocate(dims, size, _type, 0, step.p, ACCESS_RW /* ignored */, USAGE_DEFAULT);
CV_Assert(u != 0);
}
CV_Assert( step[dims-1] == (size_t)CV_ELEM_SIZE(flags) );
}

// 增加引用计数
addref();
finalizeHdr(*this);
}

读取、写入、显示图像

从文件中读取图像:

Mat img = imread(filename);

灰度图像:

Mat img = imread(filename, IMREAD_GRAYSCALE);

将图像写入文件:

imwrite(filename, img);

展示图像:

imshow( "Title", img );

imread() 方法在 modules/imgcodecs/src/loadsave.cpp 中:

Mat imread( const String& filename, int flags )
{
CV_TRACE_FUNCTION();

/// 创建基本的图像容器
Mat img;

/// 加载数据
imread_( filename, flags, img );

/// 如果 EXIF 文件设置了方向 flag 就应用
if( !img.empty() && (flags & IMREAD_IGNORE_ORIENTATION) == 0 && flags != IMREAD_UNCHANGED )
{
ApplyExifOrientation(filename, img);
}

/// 返回图像数据的引用
return img;
}

imread_() 方法实现:

static bool
imread_( const String& filename, int flags, Mat& mat )
{
/// 找到对应的解码器处理图像
ImageDecoder decoder;

#ifdef HAVE_GDAL
if(flags != IMREAD_UNCHANGED && (flags & IMREAD_LOAD_GDAL) == IMREAD_LOAD_GDAL ){
decoder = GdalDecoder().newDecoder();
}else{
#endif
// 根据文件信息找到解码器
decoder = findDecoder( filename );
#ifdef HAVE_GDAL
}
#endif

/// 如果没找到对应的解码器,返回 nothing
if( !decoder ){
return 0;
}

// 缩放分母
int scale_denom = 1;
if( flags > IMREAD_LOAD_GDAL )
{
if( flags & IMREAD_REDUCED_GRAYSCALE_2 )
scale_denom = 2;
else if( flags & IMREAD_REDUCED_GRAYSCALE_4 )
scale_denom = 4;
else if( flags & IMREAD_REDUCED_GRAYSCALE_8 )
scale_denom = 8;
}

/// 设置缩放
decoder->setScale( scale_denom );

/// 设置文件名
decoder->setSource( filename );

try
{
// 读取文件头确保成功
if( !decoder->readHeader() )
return 0;
}
catch (const cv::Exception& e)
{
std::cerr << "imread_('" << filename << "'): can't read header: " << e.what() << std::endl << std::flush;
return 0;
}
catch (...)
{
std::cerr << "imread_('" << filename << "'): can't read header: unknown exception" << std::endl << std::flush;
return 0;
}


// 建立所需图像文件的大小
Size size = validateInputImageSize(Size(decoder->width(), decoder->height()));

// 获得解码类型
int type = decoder->type();
if( (flags & IMREAD_LOAD_GDAL) != IMREAD_LOAD_GDAL && flags != IMREAD_UNCHANGED )
{
if( (flags & IMREAD_ANYDEPTH) == 0 )
type = CV_MAKETYPE(CV_8U, CV_MAT_CN(type));

if( (flags & IMREAD_COLOR) != 0 ||
((flags & IMREAD_ANYCOLOR) != 0 && CV_MAT_CN(type) > 1) )
type = CV_MAKETYPE(CV_MAT_DEPTH(type), 3);
else
type = CV_MAKETYPE(CV_MAT_DEPTH(type), 1);
}

// 根据上面得到的信息创建存放图像的 Mat
mat.create( size.height, size.width, type );

// 开始读取图像数据
bool success = false;
try
{
//将数据放入容器
if (decoder->readData(mat))
success = true;
}
catch (const cv::Exception& e)
{
std::cerr << "imread_('" << filename << "'): can't read data: " << e.what() << std::endl << std::flush;
}
catch (...)
{
std::cerr << "imread_('" << filename << "'): can't read data: unknown exception" << std::endl << std::flush;
}
if (!success)
{
mat.release();
return false;
}

// 如果设置了缩放就进行调整
if( decoder->setScale( scale_denom ) > 1 ) // if decoder is JpegDecoder then decoder->setScale always returns 1
{
resize( mat, mat, Size( size.width / scale_denom, size.height / scale_denom ), 0, 0, INTER_LINEAR_EXACT);
}

return true;
}

写操作类似,找到对应的编码器,设置好文件名,具体的写入交给编码器来做。

imshow() 位于modules/highgui/src/window.cpp

void cv::imshow( const String& winname, InputArray _img )
{
CV_TRACE_FUNCTION();
const Size size = _img.size();
#ifndef HAVE_OPENGL
CV_Assert(size.width>0 && size.height>0);
{
Mat img = _img.getMat();
CvMat c_img = cvMat(img);
cvShowImage(winname.c_str(), &c_img);
}
#else
const double useGl = getWindowProperty(winname, WND_PROP_OPENGL);
CV_Assert(size.width>0 && size.height>0);

if (useGl <= 0)
{
Mat img = _img.getMat();
CvMat c_img = cvMat(img);
cvShowImage(winname.c_str(), &c_img);
}
else
{
const double autoSize = getWindowProperty(winname, WND_PROP_AUTOSIZE);

if (autoSize > 0)
{
resizeWindow(winname, size.width, size.height);
}

setOpenGlContext(winname);

cv::ogl::Texture2D& tex = ownWndTexs[winname];

if (_img.kind() == _InputArray::CUDA_GPU_MAT)
{
cv::ogl::Buffer& buf = ownWndBufs[winname];
buf.copyFrom(_img);
buf.setAutoRelease(false);

tex.copyFrom(buf);
tex.setAutoRelease(false);
}
else
{
tex.copyFrom(_img);
}

tex.setAutoRelease(false);

setOpenGlDrawCallback(winname, glDrawTextureCallback, &tex);

updateWindow(winname);
}
#endif
}

下面还有部分代码,总的逻辑是如果定义了 OpenGL,就使用 OpenGL 进行绘制。如果没有定义 OpenGL,下面还可以使用 Qt、Win32UI、GTK 等方式绘图。

视频

画图

图像的代数运算

OpenCV 提供了 add() subtract() multiply() divide() 等图像的加减乘除等运算。在modules/core/include/opencv2/core.hpp 中定义,实现在modules/core/src/arithm.cpp中。

比如 add() 源码:

void cv::add( InputArray src1, InputArray src2, OutputArray dst,
InputArray mask, int dtype )
{
CV_INSTRUMENT_REGION();

arithm_op(src1, src2, dst, mask, dtype, getAddTab(), false, 0, OCL_OP_ADD );
}

根据传入的图像参数和运算类型,调用arithm_op方法进行运算。

几何变换

直方图

滤波处理

filter2D

可以使用 2 维卷积来对 2D 图像进行平滑和锐化操作。OpenCV 中使用 filter2D() 方法对一幅 2D 图像进行卷积操作。

源码位于modules/imgproc/src/filter.dispatch.cpp,函数定义如下:

void filter2D(InputArray _src, OutputArray _dst, int ddepth,
InputArray _kernel, Point anchor0,
double delta, int borderType)

内部调用了:

void filter2D(int stype, int dtype, int kernel_type,
uchar * src_data, size_t src_step,
uchar * dst_data, size_t dst_step,
int width, int height,
int full_width, int full_height,
int offset_x, int offset_y,
uchar * kernel_data, size_t kernel_step,
int kernel_width, int kernel_height,
int anchor_x, int anchor_y,
double delta, int borderType,
bool isSubmatrix)
{
bool res;
res = replacementFilter2D(stype, dtype, kernel_type,
src_data, src_step,
dst_data, dst_step,
width, height,
full_width, full_height,
offset_x, offset_y,
kernel_data, kernel_step,
kernel_width, kernel_height,
anchor_x, anchor_y,
delta, borderType, isSubmatrix);
if (res)
return;

/*CV_IPP_RUN_FAST(ippFilter2D(stype, dtype, kernel_type,
src_data, src_step,
dst_data, dst_step,
width, height,
full_width, full_height,
offset_x, offset_y,
kernel_data, kernel_step,
kernel_width, kernel_height,
anchor_x, anchor_y,
delta, borderType, isSubmatrix))*/

res = dftFilter2D(stype, dtype, kernel_type,
src_data, src_step,
dst_data, dst_step,
width, height,
full_width, full_height,
offset_x, offset_y,
kernel_data, kernel_step,
kernel_width, kernel_height,
anchor_x, anchor_y,
delta, borderType);
if (res)
return;
ocvFilter2D(stype, dtype, kernel_type,
src_data, src_step,
dst_data, dst_step,
width, height,
full_width, full_height,
offset_x, offset_y,
kernel_data, kernel_step,
kernel_width, kernel_height,
anchor_x, anchor_y,
delta, borderType);
}

先尝试使用replacementFilter2D进行滤波处理。如果没有计算出结果,则使用dftFilter2D,基于 DFT (离散傅立叶变换)的滤波方式。如果还是没有计算出结果,则采用原始的方式进行计算。

blur

可以使用 blur() 进行均值滤波。

比如 C++ 中:

blur( src, dst, Size( 3, 3 ), Point(-1,-1) );

源码位于modules/imgproc/src/box_filter.dispatch.cpp

void blur(InputArray src, OutputArray dst,
Size ksize, Point anchor, int borderType)
{
CV_INSTRUMENT_REGION();

boxFilter( src, dst, -1, ksize, anchor, true, borderType );
}
void boxFilter(InputArray _src, OutputArray _dst, int ddepth,
Size ksize, Point anchor,
bool normalize, int borderType)
{
CV_INSTRUMENT_REGION();

CV_Assert(!_src.empty());

CV_OCL_RUN(_dst.isUMat() &&
(borderType == BORDER_REPLICATE || borderType == BORDER_CONSTANT ||
borderType == BORDER_REFLECT || borderType == BORDER_REFLECT_101),
ocl_boxFilter3x3_8UC1(_src, _dst, ddepth, ksize, anchor, borderType, normalize))

CV_OCL_RUN(_dst.isUMat(), ocl_boxFilter(_src, _dst, ddepth, ksize, anchor, borderType, normalize))

Mat src = _src.getMat();
int stype = src.type(), sdepth = CV_MAT_DEPTH(stype), cn = CV_MAT_CN(stype);
if( ddepth < 0 )
ddepth = sdepth;
_dst.create( src.size(), CV_MAKETYPE(ddepth, cn) );
Mat dst = _dst.getMat();
if( borderType != BORDER_CONSTANT && normalize && (borderType & BORDER_ISOLATED) != 0 )
{
if( src.rows == 1 )
ksize.height = 1;
if( src.cols == 1 )
ksize.width = 1;
}

Point ofs;
Size wsz(src.cols, src.rows);
if(!(borderType&BORDER_ISOLATED))
src.locateROI( wsz, ofs );

CALL_HAL(boxFilter, cv_hal_boxFilter, src.ptr(), src.step, dst.ptr(), dst.step, src.cols, src.rows, sdepth, ddepth, cn,
ofs.x, ofs.y, wsz.width - src.cols - ofs.x, wsz.height - src.rows - ofs.y, ksize.width, ksize.height,
anchor.x, anchor.y, normalize, borderType&~BORDER_ISOLATED);

CV_OVX_RUN(true,
openvx_boxfilter(src, dst, ddepth, ksize, anchor, normalize, borderType))

//CV_IPP_RUN_FAST(ipp_boxfilter(src, dst, ksize, anchor, normalize, borderType));

borderType = (borderType&~BORDER_ISOLATED);

Ptr<FilterEngine> f = createBoxFilter( src.type(), dst.type(),
ksize, anchor, normalize, borderType );

f->apply( src, dst, wsz, ofs );
}

medianBlur

中值滤波使用中位数来代替模板中原点对应位置的值。

medianBlur ( src, dst, 3 );

源码位于modules/imgproc/src/median_blur.dispatch.cpp

void medianBlur( InputArray _src0, OutputArray _dst, int ksize )
{
CV_INSTRUMENT_REGION();

CV_Assert(!_src0.empty());

CV_Assert( (ksize % 2 == 1) && (_src0.dims() <= 2 ));

if( ksize <= 1 || _src0.empty() )
{
_src0.copyTo(_dst);
return;
}

// 尝试使用 OpenCL
CV_OCL_RUN(_dst.isUMat(),
ocl_medianFilter(_src0,_dst, ksize))

Mat src0 = _src0.getMat();
_dst.create( src0.size(), src0.type() );
Mat dst = _dst.getMat();

// 尝试使用 cv_hal_medianBlur,目前这是个空方法
CALL_HAL(medianBlur, cv_hal_medianBlur, src0.data, src0.step, dst.data, dst.step, src0.cols, src0.rows, src0.depth(),
src0.channels(), ksize);

// 尝试使用 OpenVX
CV_OVX_RUN(true,
openvx_medianFilter(_src0, _dst, ksize))

//CV_IPP_RUN_FAST(ipp_medianFilter(src0, dst, ksize));

CV_CPU_DISPATCH(medianBlur, (src0, dst, ksize),
CV_CPU_DISPATCH_MODES_ALL);
}

最后的 OpenCV 中内置的中值滤波的实现方法:

void medianBlur(const Mat& src0, /*const*/ Mat& dst, int ksize)
{
CV_INSTRUMENT_REGION();

bool useSortNet = ksize == 3 || (ksize == 5);

Mat src;
if( useSortNet )
{
if( dst.data != src0.data )
src = src0;
else
src0.copyTo(src);

if( src.depth() == CV_8U )
medianBlur_SortNet<MinMax8u, MinMaxVec8u>( src, dst, ksize );
else if( src.depth() == CV_16U )
medianBlur_SortNet<MinMax16u, MinMaxVec16u>( src, dst, ksize );
else if( src.depth() == CV_16S )
medianBlur_SortNet<MinMax16s, MinMaxVec16s>( src, dst, ksize );
else if( src.depth() == CV_32F )
medianBlur_SortNet<MinMax32f, MinMaxVec32f>( src, dst, ksize );
else
CV_Error(CV_StsUnsupportedFormat, "");

return;
}
else
{
// TODO AVX guard (external call)
cv::copyMakeBorder( src0, src, 0, 0, ksize/2, ksize/2, BORDER_REPLICATE|BORDER_ISOLATED);

int cn = src0.channels();
CV_Assert( src.depth() == CV_8U && (cn == 1 || cn == 3 || cn == 4) );

double img_size_mp = (double)(src0.total())/(1 << 20);
if( ksize <= 3 + (img_size_mp < 1 ? 12 : img_size_mp < 4 ? 6 : 2)*
(CV_SIMD ? 1 : 3))
medianBlur_8u_Om( src, dst, ksize );
else
medianBlur_8u_O1( src, dst, ksize );
}
}

可以看到共有 3 个出口:如果核的大小是 3 或 5,则使用 SortNet;否则,较小的核使用 medianBlur_8u_Om 算法,较大的核使用 medianBlur_8u_O1 算法。

再看下medianBlur_SortNet()方法:

// Op 用于比较两个值的大小,VecOp 用于比较两个向量的大小
template<class Op, class VecOp>
static void
medianBlur_SortNet( const Mat& _src, Mat& _dst, int m )
{
CV_INSTRUMENT_REGION();

typedef typename Op::value_type T;
typedef typename Op::arg_type WT;
typedef typename VecOp::arg_type VT;
#if CV_SIMD_WIDTH > 16
typedef typename VecOp::warg_type WVT;
#endif

const T* src = _src.ptr<T>();
T* dst = _dst.ptr<T>();
int sstep = (int)(_src.step/sizeof(T));
int dstep = (int)(_dst.step/sizeof(T));
Size size = _dst.size();
int i, j, k, cn = _src.channels();
Op op;
VecOp vop;

if( m == 3 )
{
if( size.width == 1 || size.height == 1 )
{
// 处理特殊情况
return;
}

size.width *= cn;
// 行遍历
for( i = 0; i < size.height; i++, dst += dstep )
{
const T* row0 = src + std::max(i - 1, 0)*sstep;
const T* row1 = src + i*sstep;
const T* row2 = src + std::min(i + 1, size.height-1)*sstep;
int limit = cn;

// 列遍历
for(j = 0;; )
{
// 遍历 cn 个像素点,对每个通道都进行比较得到中值
for( ; j < limit; j++ )
{
int j0 = j >= cn ? j - cn : j;
int j2 = j < size.width - cn ? j + cn : j;
WT p0 = row0[j0], p1 = row0[j], p2 = row0[j2];
WT p3 = row1[j0], p4 = row1[j], p5 = row1[j2];
WT p6 = row2[j0], p7 = row2[j], p8 = row2[j2];

// 当 p1 < p2 时交换两个数字
op(p1, p2); op(p4, p5); op(p7, p8); op(p0, p1);
op(p3, p4); op(p6, p7); op(p1, p2); op(p4, p5);
op(p7, p8); op(p0, p3); op(p5, p8); op(p4, p7);
op(p3, p6); op(p1, p4); op(p2, p5); op(p4, p7);
op(p4, p2); op(p6, p4); op(p4, p2);
dst[j] = (T)p4;
}

if( limit == size.width )
break;

for( ; j <= size.width - VecOp::SIZE - cn; j += VecOp::SIZE )
{
VT p0 = vop.load(row0+j-cn), p1 = vop.load(row0+j), p2 = vop.load(row0+j+cn);
VT p3 = vop.load(row1+j-cn), p4 = vop.load(row1+j), p5 = vop.load(row1+j+cn);
VT p6 = vop.load(row2+j-cn), p7 = vop.load(row2+j), p8 = vop.load(row2+j+cn);

vop(p1, p2); vop(p4, p5); vop(p7, p8); vop(p0, p1);
vop(p3, p4); vop(p6, p7); vop(p1, p2); vop(p4, p5);
vop(p7, p8); vop(p0, p3); vop(p5, p8); vop(p4, p7);
vop(p3, p6); vop(p1, p4); vop(p2, p5); vop(p4, p7);
vop(p4, p2); vop(p6, p4); vop(p4, p2);
vop.store(dst+j, p4);
}

limit = size.width;
}
}
}
else if( m == 5 )
{
// 类似于 m == 3
}
}

形态学

有关形态学操作的代码在modules/imgproc/src/morph.dispatch.cpp中。

其中getStructuringElement()方法用于构造形态学操作的核。

Mat getStructuringElement(int shape, Size ksize, Point anchor)
{
int i, j;
int r = 0, c = 0;
double inv_r2 = 0;

anchor = normalizeAnchor(anchor, ksize);
if( ksize == Size(1,1) )
shape = MORPH_RECT;
if( shape == MORPH_ELLIPSE )
{
r = ksize.height/2;
c = ksize.width/2;
inv_r2 = r ? 1./((double)r*r) : 0;
}

Mat elem(ksize, CV_8U);
for( i = 0; i < ksize.height; i++ )
{
uchar* ptr = elem.ptr(i);
int j1 = 0, j2 = 0;
/// 形状为矩形或者十字刚好处于横着的那一行时,直接填充一整行
if( shape == MORPH_RECT || (shape == MORPH_CROSS && i == anchor.y) )
j2 = ksize.width;
else if( shape == MORPH_CROSS )
j1 = anchor.x, j2 = j1 + 1;
else
{
int dy = i - r;
if( std::abs(dy) <= r )
{
/// 计算近似的椭圆的宽度
int dx = saturate_cast<int>(c*std::sqrt((r*r - dy*dy)*inv_r2));
j1 = std::max( c - dx, 0 );
j2 = std::min( c + dx + 1, ksize.width );
}
}
/// 填充 j1 ~ j2 区间内的数字
for( j = 0; j < j1; j++ )
ptr[j] = 0;
for( ; j < j2; j++ )
ptr[j] = 1;
for( ; j < ksize.width; j++ )
ptr[j] = 0;
}

return elem;
}

形态学中常见的腐蚀、膨胀操作:

void erode( InputArray src, OutputArray dst, InputArray kernel,
Point anchor, int iterations,
int borderType, const Scalar& borderValue )
{
CV_INSTRUMENT_REGION();

CV_Assert(!src.empty());

morphOp( MORPH_ERODE, src, dst, kernel, anchor, iterations, borderType, borderValue );
}


void dilate( InputArray src, OutputArray dst, InputArray kernel,
Point anchor, int iterations,
int borderType, const Scalar& borderValue )
{
CV_INSTRUMENT_REGION();

CV_Assert(!src.empty());

morphOp( MORPH_DILATE, src, dst, kernel, anchor, iterations, borderType, borderValue );
}

morphOp() 方法中会提供一个空的 cv_hal_morph 方法供用户自行定义 morph 的实现。若没有自定义的实现,则调用 OpenCV 内置的 ocvMorph 方法。和其他滤波器类似,在该方法中调用 createMorphologyFilter 得到一个 FilterEngine,最后调用 apply 方法进行计算。最终实际进行形态学滤波运算的是 MorphFilter 这样一个模板类:


template<class Op, class VecOp> struct MorphFilter : BaseFilter
{
typedef typename Op::rtype T;
MorphFilter( const Mat& _kernel, Point _anchor ) { ... }

void operator()(const uchar** src, uchar* dst, int dststep, int count, int width, int cn) CV_OVERRIDE
{
...
width *= cn;
/// 遍历每一行
for( ; count > 0; count--, dst += dststep, src++ )
{
...
/// 遍历每一列
for( ; i < width; i++ )
{
T s0 = kp[0][i];
/// 滤波操作
for( k = 1; k < nz; k++ )
s0 = op(s0, kp[k][i]);
D[i] = s0;
}
}
}
};

对于 erode 和 dilate 两种操作,只需要分别传入 MinOp(返回值更小的那个) 和 MaxOp(返回值更大的那个) 即可。以 erode 为例,传入 MinOp 之后,对于核上每一个为 1 的点,覆盖到图像上的对应位置也必须为 1,否则由于 min 操作的特性,只要有一个是 0 最后的结果就会是 0,这个操作的结果就是,将核中心放在结果图像上任意一个为 1 的点,都能够被原图像包裹,即结果图像是源图像的腐蚀。膨胀则使用最大值,分析类似。

形态学中的其它操作基本都转化为 erode 和 dilate 操作,由 morphologyEx() 方法可见:

switch( op )
{
case MORPH_ERODE: /// 腐蚀操作
erode( src, dst, kernel, anchor, iterations, borderType, borderValue );
break;
case MORPH_DILATE: /// 扩张操作
dilate( src, dst, kernel, anchor, iterations, borderType, borderValue );
break;
case MORPH_OPEN: /// 开操作
erode( src, dst, kernel, anchor, iterations, borderType, borderValue );
dilate( dst, dst, kernel, anchor, iterations, borderType, borderValue );
break;
case MORPH_CLOSE: /// 闭操作
dilate( src, dst, kernel, anchor, iterations, borderType, borderValue );
erode( dst, dst, kernel, anchor, iterations, borderType, borderValue );
break;
case MORPH_GRADIENT: /// 梯度计算操作
erode( src, temp, kernel, anchor, iterations, borderType, borderValue );
dilate( src, dst, kernel, anchor, iterations, borderType, borderValue );
dst -= temp;
break;
case MORPH_TOPHAT: /// 顶帽操作
if( src.data != dst.data )
temp = dst;
erode( src, temp, kernel, anchor, iterations, borderType, borderValue );
dilate( temp, temp, kernel, anchor, iterations, borderType, borderValue );
dst = src - temp;
break;
case MORPH_BLACKHAT: /// 黑帽操作
if( src.data != dst.data )
temp = dst;
dilate( src, temp, kernel, anchor, iterations, borderType, borderValue );
erode( temp, temp, kernel, anchor, iterations, borderType, borderValue );
dst = temp - src;
break;
...
}

图像分割

Canny 边缘检测

OpenCV 中实施 Canny 边缘检测的一般形式:

Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );

源代码位于modules/imgproc/src/canny.cpp

// image	8 bit 输入图像
// edges 输出的图像边缘,单通道 8 bit 图像,和原图像有同样大小
// threshold1 低阈值
// threshold2 高阈值
// apertureSize Sobel 孔径大小
// L2gradient flag,是否采用更精确的 L2 梯度
void Canny( InputArray _src, OutputArray _dst,
double low_thresh, double high_thresh,
int aperture_size, bool L2gradient )
{

// 省略一些检查和初始化工作

const Size size = _src.size();

_dst.create(size, CV_8U);

Mat src0 = _src.getMat(), dst = _dst.getMat();
Mat src(src0.size(), src0.type(), src0.data, src0.step);

/// 尝试是否可以通过 OpenCL、HAL、OpenVX、IPP 完成 Canny 算法的计算
CV_OCL_RUN(...)
CALL_HAL(...);
CV_OVX_RUN(...)
CV_IPP_RUN_FAST(...)

// 如果使用 L2 梯度,需要修正 thresh
if (L2gradient)
{
low_thresh = std::min(32767.0, low_thresh);
high_thresh = std::min(32767.0, high_thresh);

if (low_thresh > 0) low_thresh *= low_thresh;
if (high_thresh > 0) high_thresh *= high_thresh;
}
int low = cvFloor(low_thresh);
int high = cvFloor(high_thresh);

// If Scharr filter: aperture size is 3, ksize2 is 1
int ksize2 = aperture_size < 0 ? 1 : aperture_size / 2;
// 计算可用的线程数
int numOfThreads = std::max(1, std::min(getNumThreads(), getNumberOfCPUs()));
// Make a fallback for pictures with too few rows.
int grainSize = src.rows / numOfThreads;
int minGrainSize = 2 * (ksize2 + 1);
if (grainSize < minGrainSize)
numOfThreads = std::max(1, src.rows / minGrainSize);

Mat map;
std::deque<uchar*> stack;

// 并行计算 Canny 算法
parallel_for_(Range(0, src.rows), parallelCanny(src, map, stack, low, high, aperture_size, L2gradient), numOfThreads);

CV_TRACE_REGION("global_hysteresis");
// 全局进行 edge track
ptrdiff_t mapstep = map.cols;

while (!stack.empty())
{
uchar* m = stack.back();
stack.pop_back();

if (!m[-mapstep-1]) CANNY_PUSH((m-mapstep-1), stack);
if (!m[-mapstep]) CANNY_PUSH((m-mapstep), stack);
if (!m[-mapstep+1]) CANNY_PUSH((m-mapstep+1), stack);
if (!m[-1]) CANNY_PUSH((m-1), stack);
if (!m[1]) CANNY_PUSH((m+1), stack);
if (!m[mapstep-1]) CANNY_PUSH((m+mapstep-1), stack);
if (!m[mapstep]) CANNY_PUSH((m+mapstep), stack);
if (!m[mapstep+1]) CANNY_PUSH((m+mapstep+1), stack);
}

CV_TRACE_REGION_NEXT("finalPass");
// 转换 map 中的点,显示出边缘
parallel_for_(Range(0, src.rows), finalPass(map, dst), src.total()/(double)(1<<16));
}

主要函数为 parallelCanny(src, map, stack, low, high, aperture_size, L2gradient),源码:

if(needGradient)
{
if (aperture_size == 7)
{
scale = 1 / 16.0;
}
// 如果需要梯度,用 Sobel 算子计算
Sobel(src.rowRange(rowStart, rowEnd), dx, CV_16S, 1, 0, aperture_size, scale, 0, BORDER_REPLICATE);
Sobel(src.rowRange(rowStart, rowEnd), dy, CV_16S, 0, 1, aperture_size, scale, 0, BORDER_REPLICATE);
}
else
{
dx = src.rowRange(rowStart, rowEnd);
dy = src2.rowRange(rowStart, rowEnd);
}

计算强度和角度梯度,应用非最大抑制:


// 用值填充 map
// 0 - 像素可能属于边界
// 1 - 像素不属于边界
// 2 - 像素属于边界
for (int i = rowStart; i <= boundaries.end; ++i)
{
// 梯度计算

if(i < rowEnd)
{
// 计算下一行
_dx = dx.ptr<short>(i - rowStart);
_dy = dy.ptr<short>(i - rowStart);

// 使用 L2 梯度
if (L2gradient)
{
int j = 0, width = src.cols * cn;
// 省略 SIMD 相关代码
for ( ; j < width; ++j)
_mag_n[j] = int(_dx[j])*_dx[j] + int(_dy[j])*_dy[j];
}
else // 使用 L1 梯度
{
int j = 0, width = src.cols * cn;
for ( ; j < width; ++j)
_mag_n[j] = std::abs(int(_dx[j])) + std::abs(int(_dy[j]));
}
...
}
else
{
...
}

...

// 非极大值抑制
const int TG22 = 13573;
int j = 0;

for (; j < src.cols; j++)
{
int m = _mag_a[j];

if (m > low)
{
short xs = _dx[j];
short ys = _dy[j];
int x = (int)std::abs(xs);
int y = (int)std::abs(ys) << 15;

int tg22x = x * TG22;

// 水平方向梯度
if (y < tg22x)
{
if (m > _mag_a[j - 1] && m >= _mag_a[j + 1])
{
// 检查是否大于 high 阈值,是入栈且 pmap 设为 0
CANNY_CHECK(m, high, (_pmap+j), stack);
continue;
}
}
else
{
// 垂直方向梯度
int tg67x = tg22x + (x << 16);
if (y > tg67x)
{
if (m > _mag_p[j] && m >= _mag_n[j])
{
CANNY_CHECK(m, high, (_pmap+j), stack);
continue;
}
}
else
{
// 斜方向梯度
int s = (xs ^ ys) < 0 ? -1 : 1;
if(m > _mag_p[j - s] && m > _mag_n[j + s])
{
CANNY_CHECK(m, high, (_pmap+j), stack);
continue;
}
}
}
}
_pmap[j] = 1;
}
}

进行 edge track(边缘追踪),即使用滞后阈值进行改进:

// now track the edges (hysteresis thresholding)
CV_TRACE_REGION_NEXT("hysteresis");
while (!stack.empty())
{
uchar *m = stack.back();
stack.pop_back();
// 如果不是位于边界
if((unsigned)(m - pmapLower) < pmapDiff)
{
if (!m[-mapstep-1]) CANNY_PUSH((m-mapstep-1), stack);
if (!m[-mapstep]) CANNY_PUSH((m-mapstep), stack);
if (!m[-mapstep+1]) CANNY_PUSH((m-mapstep+1), stack);
if (!m[-1]) CANNY_PUSH((m-1), stack);
if (!m[1]) CANNY_PUSH((m+1), stack);
if (!m[mapstep-1]) CANNY_PUSH((m+mapstep-1), stack);
if (!m[mapstep]) CANNY_PUSH((m+mapstep), stack);
if (!m[mapstep+1]) CANNY_PUSH((m+mapstep+1), stack);
}
else
{
// 处理边界
borderPeaksLocal.push_back(m);
ptrdiff_t mapstep2 = m < pmapLower ? mapstep : -mapstep;

if (!m[-1]) CANNY_PUSH((m-1), stack);
if (!m[1]) CANNY_PUSH((m+1), stack);
if (!m[mapstep2-1]) CANNY_PUSH((m+mapstep2-1), stack);
if (!m[mapstep2]) CANNY_PUSH((m+mapstep2), stack);
if (!m[mapstep2+1]) CANNY_PUSH((m+mapstep2+1), stack);
}
}

最后一步,将边缘点(标记为2)映射为灰度值 255(白),其它映射到 0:

// the final pass, form the final image
for (int i = boundaries.start; i < boundaries.end; i++)
{
int j = 0;
uchar *pdst = dst.ptr<uchar>(i);
const uchar *pmap = map.ptr<uchar>(i + 1);
pmap += 1;
for (; j < dst.cols; j++)
{
pdst[j] = (uchar)-(pmap[j] >> 1);
}
}

右移一位(>>1)后只有标记 2 变为 1,再取负并转成 uchar,-1 就变成了 255;其它的 0 和 1 右移后都是 0,因此映射到 0。这样就得到了边缘图像。

OTSU

彩色图像处理

OpenCV-Python

OpenCV 中,所有的算法都是通过 C++ 实现的,但也可以通过其它的语言调用,如 Java、Python。绑定生成器使这成为可能:这些生成器在 C++ 和 Python 之间建立了桥梁,使用户能够从 Python 调用 C++ 函数。

Python 官方提供了将 C++ 扩展到 Python 的方法,OpenCV 则通过 modules/python/src2 中的一些脚本,根据 C++ 头文件自动生成包装器函数。

modules/python/CMakeLists.txt 是一个 CMake 脚本,用于检查要扩展到 Python 的模块。它会自动检查所有要扩展的模块并获取其头文件,这些头文件包含该模块的所有类、函数、常量等的列表。

接着,这些头文件被传递给 Python 脚本 modules/python/src2/gen2.py,这是 Python 绑定的生成器脚本。它会调用另一个 Python 脚本 modules/python/src2/hdr_parser.py,即头文件解析器。头文件解析器把完整的头文件拆分为较小的 Python 列表,这些列表包含某个函数、类等的所有细节。例如,对一个函数进行解析后,会得到一个包含函数名、返回类型、输入参数、参数类型等信息的列表。最终的列表包含了头文件中所有函数、枚举、结构体、类等的详细信息。

因此,头文件解析器会返回一个包含所有已解析函数的大列表。生成器脚本(gen2.py)会为解析出的所有函数、类、枚举、结构体创建包装函数(编译期间可以在 build/modules/python/ 目录下的 pyopencv_generated_*.h 文件中找到它们)。但还有一些基本的 OpenCV 数据类型,例如 Mat、Vec4i、Size,需要手动扩展:Mat 要扩展为 Numpy 数组,Size 要扩展为两个整数的元组,等等。类似地,还有一些复杂的结构体/类/函数也需要手动扩展,所有这些手动编写的包装函数都放在 modules/python/src2/cv2.cpp 中。

所以现在剩下的就是这些包装文件的编译了,这给了我们cv2模块。因此,当你调用函数时,例如在Python中说res = equalizeHist(img1,img2),你将传递两个numpy数组,并期望另一个numpy数组作为输出。因此,将这些numpy数组转换为cv::Mat,然后在C++中调用equalizeHist()函数。最终结果将res转换回Numpy数组。简而言之,几乎所有操作都是在 C++ 中完成的,这给了我们几乎与C++相同的速度。

参考

OpenCV: OpenCV Tutorials

OpenCV: OpenCV modules

opencv 源码初探 | Little csd’s blog

OpenCV: How OpenCV-Python Bindings Works?

OpenCVTutorials/11_1_OpenCV-Python Bindings.md at master · fendouai/OpenCVTutorials

数字图像处理(第五版) 电子工业出版社

JDK1.8

概述

源码注释

Integer 类将基本类型 int 的值包装成一个对象。一个类型为 Integer 对象包含类型为 int 的单个域。

另外,这个类提供了许多在 int 和 String 之间相互转换的方法,还有其它的一些处理 int 时有用的常量和方法。

IntegerCache

Integer 类中有一个私有静态内部类—— IntegerCache

IntegerCache 的源码注释:

用于支持被 JLS 要求的值在 -128 到 127 (含) 的自动装箱对象辨识语义。

这个缓存在第一次使用时初始化。cache 的大小可以通过 -XX:AutoBoxCacheMax=<size> 来控制。在虚拟机初始化期间,java.lang.Integer.IntegerCache.high 属性可以被设置,存在 sun.misc.VM 类中私有的系统属性中。

private static class IntegerCache {
static final int low = -128;
static final int high;
static final Integer cache[];

static {
// high value may be configured by property
int h = 127;
String integerCacheHighPropValue =
sun.misc.VM.getSavedProperty("java.lang.Integer.IntegerCache.high");
if (integerCacheHighPropValue != null) {
try {
int i = parseInt(integerCacheHighPropValue);
i = Math.max(i, 127);
// Maximum array size is Integer.MAX_VALUE
h = Math.min(i, Integer.MAX_VALUE - (-low) -1);
} catch( NumberFormatException nfe) {
// If the property cannot be parsed into an int, ignore it.
}
}
high = h;

cache = new Integer[(high - low) + 1];
int j = low;
for(int k = 0; k < cache.length; k++)
cache[k] = new Integer(j++);

// range [-128, 127] must be interned (JLS7 5.1.7)
assert IntegerCache.high >= 127;
}

private IntegerCache() {}
}
public static void main(String[] args) {
Integer a = new Integer(127);
Integer b = new Integer(127);
int c = new Integer(127);
int d = new Integer(127);
int e = new Integer(128);
int f = new Integer(128);
Integer g = 128;
Integer h = 128;
Integer i = 127;
Integer j = 127;
Integer k = 1;
Integer l = 2;
Integer m = 3;
Long n = 3L;
System.out.println(a == b); // false
System.out.println(a.equals(b)); // true
System.out.println(c == d); // true
System.out.println(e == f); // true
System.out.println(g == h); // false
System.out.println(i == j); // true
System.out.println(m == (k + l)); // true
System.out.println(m.equals(k + l)); // true
System.out.println(n == (k + l)); // true
System.out.println(n.equals(a + b)); // false
}
javap -c Test

输出:

Compiled from "Test.java"
public class stack.Test {
public stack.Test();
Code:
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: return

public static void main(java.lang.String[]);
Code:
0: new #2 // class java/lang/Integer
3: dup
4: bipush 127
6: invokespecial #3 // Method java/lang/Integer."<init>":(I)V
9: astore_1
10: new #2 // class java/lang/Integer
13: dup
14: bipush 127
16: invokespecial #3 // Method java/lang/Integer."<init>":(I)V
19: astore_2
20: new #2 // class java/lang/Integer
23: dup
24: bipush 127
26: invokespecial #3 // Method java/lang/Integer."<init>":(I)V
29: invokevirtual #4 // Method java/lang/Integer.intValue:()I
32: istore_3
33: new #2 // class java/lang/Integer
36: dup
37: bipush 127
39: invokespecial #3 // Method java/lang/Integer."<init>":(I)V
42: invokevirtual #4 // Method java/lang/Integer.intValue:()I
45: istore 4
47: new #2 // class java/lang/Integer
50: dup
51: sipush 128
54: invokespecial #3 // Method java/lang/Integer."<init>":(I)V
57: invokevirtual #4 // Method java/lang/Integer.intValue:()I
60: istore 5
62: new #2 // class java/lang/Integer
65: dup
66: sipush 128
69: invokespecial #3 // Method java/lang/Integer."<init>":(I)V
72: invokevirtual #4 // Method java/lang/Integer.intValue:()I
75: istore 6
77: sipush 128
80: invokestatic #5 // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
83: astore 7
85: sipush 128
88: invokestatic #5 // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
91: astore 8
93: bipush 127
95: invokestatic #5 // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
98: astore 9
100: bipush 127
102: invokestatic #5 // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
105: astore 10
107: getstatic #6 // Field java/lang/System.out:Ljava/io/PrintStream;
110: aload_1
111: aload_2
112: if_acmpne 119
115: iconst_1
116: goto 120
119: iconst_0
120: invokevirtual #7 // Method java/io/PrintStream.println:(Z)V
123: getstatic #6 // Field java/lang/System.out:Ljava/io/PrintStream;
126: aload_1
127: aload_2
128: invokevirtual #8 // Method java/lang/Integer.equals:(Ljava/lang/Object;)Z
131: invokevirtual #7 // Method java/io/PrintStream.println:(Z)V
134: getstatic #6 // Field java/lang/System.out:Ljava/io/PrintStream;
137: iload_3
138: iload 4
140: if_icmpne 147
143: iconst_1
144: goto 148
147: iconst_0
148: invokevirtual #7 // Method java/io/PrintStream.println:(Z)V
151: getstatic #6 // Field java/lang/System.out:Ljava/io/PrintStream;
154: iload 5
156: iload 6
158: if_icmpne 165
161: iconst_1
162: goto 166
165: iconst_0
166: invokevirtual #7 // Method java/io/PrintStream.println:(Z)V
169: getstatic #6 // Field java/lang/System.out:Ljava/io/PrintStream;
172: aload 7
174: aload 8
176: if_acmpne 183
179: iconst_1
180: goto 184
183: iconst_0
184: invokevirtual #7 // Method java/io/PrintStream.println:(Z)V
187: getstatic #6 // Field java/lang/System.out:Ljava/io/PrintStream;
190: aload 9
192: aload 10
194: if_acmpne 201
197: iconst_1
198: goto 202
201: iconst_0
202: invokevirtual #7 // Method java/io/PrintStream.println:(Z)V
205: return
}

可以看到后面 4 个都调用了 Integer#valueOf 方法。

包装类的 “==” 在不遇到算术运算的时候不会自动拆箱,包装类的 equals 方法不处理数据转型。

valueOf

返回一个代表指定 int 值的 Integer 实例。如果不需要一个新的 Integer 实例,通常应该优先使用本方法,而不是使用构造函数 Integer(int),因为本方法通过缓存频繁请求的值,可能会产生更好的空间和时间性能。本方法将始终缓存 -128 到 127 (含)范围内的值,并可能缓存这个范围之外的其他值。

public static Integer valueOf(int i) {
if (i >= IntegerCache.low && i <= IntegerCache.high)
return IntegerCache.cache[i + (-IntegerCache.low)];
return new Integer(i);
}

运算器实验

实验思路:是如何从头构建起这个电路并使其正常工作的。

遇到的问题,如何解决。

8 位串行可控加减法器


对于加减法的控制,你是采取何种方式实现的?
由于补码的性质,减法可以通过加法实现,只需对减数求补(按位取反再加一)后送入加法器即可。因此引入 Sub 信号:当 Sub 为 0 时,送入加法器的是 Y 本身,此时实现加法操作;当 Sub 为 1 时,通过异或门将 Y 按位取反送入加法器,同时 Sub 也作为加法器最低位的进位输入,相当于对 Y 取反加一,完成了求补过程,从而实现减法操作。
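下面用一小段 Java 代码模拟这个 8 位可控加减法器的行为,仅作原理示意(方法名和变量名均为假设):

// 8 位可控加减法:sub 为 0 做加法,为 1 做减法(对 y 按位取反,并把 sub 作为最低位进位)
static int addSub8(int x, int y, int sub) {
    int yIn = (sub == 1) ? (~y & 0xFF) : (y & 0xFF); // sub=1 时送入 y 的反码
    return (x + yIn + sub) & 0xFF;                   // sub 同时作为最低位进位,完成"取反加一"
}
// 例如 addSub8(5, 3, 1) == 2,addSub8(5, 3, 0) == 8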

你是怎样实现溢出检测的?其数学原理是什么?

利用最高数据位的进位与符号位的进位是否一致进行检测。

首先,只有两个符号位相同的数相加才可能发生溢出,那我们就可以根据操作数和运算结果的符号位是否一致进行判断。两个符号位为 0 的数,相加的结果符号位变成了 1 ,表示发生了溢出。同理,两个符号位为 1 的数,相加的结果符号位变成了 0 ,表示发生了溢出。

进而,也可以根据最高数据位的进位与符号位的进位是否一致进行检测,不同则为溢出。两个符号位为 0 的数,符号位进位则为 0 ,若此时最高数据位进位为 1,那最终的运算结果符号位就为 1,如前段所述,表明发生了溢出。两个符号位为 1 的数,符号位进位则为 1,若此时最高数据位进位为 0,那最终运算结果的符号位就为 0,如前,表明发生了溢出。

可以通过一个异或门来实现。
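下面的 Java 片段按 8 位有符号加法示意这两种判定方法(方法名为假设,a、b 只取低 8 位),二者结果是一致的:

// 方法一:根据操作数与结果的符号位判断
static boolean overflowBySign(int a, int b) {
    int r = (a + b) & 0xFF;
    // 两个同号数相加,结果符号位与操作数不同即溢出(只看第 7 位)
    return ((a ^ r) & (b ^ r) & 0x80) != 0;
}

// 方法二:根据最高数据位进位与符号位进位是否一致判断
static boolean overflowByCarry(int a, int b) {
    int c6 = (((a & 0x7F) + (b & 0x7F)) >> 7) & 1; // 最高数据位向符号位的进位
    int c7 = (((a & 0xFF) + (b & 0xFF)) >> 8) & 1; // 符号位向外的进位
    return (c6 ^ c7) == 1;                         // 两个进位不一致即溢出
}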

四位先行进位电路 CLA74182


CLA74182 的作用是什么?

实现先行进位。提前得到所有全加器所需的进位信号。串行运算高位运算需要等待低位的运算。与串行加法器相比,可以提高运算速度。

4 位快速加法器


快速加法器与普通的串行加法器的区别是什么?它是通过什么方法来实现“快速”的?

通过 CLA74182 提前得到所有全加器所需的进位信号。这样高位就不需要等待低位的进位数据,所有加法器可以并行运算,提高运算性能,实现“快速”。

16 位快速加法器


32 位快速加法器


你的溢出检测是如何实现的?这里是否可以选择其他的溢出检测方法?

根据最高数据位进位和符号位进位是否一致进行判断。

还可以选用根据操作数和运算结果的符号位是否一致来进行检测。

在有符号运算加法中,只有两个符号相同的数相加时才有可能产生溢出,因此,可以根据操作数与运算结果的符号位是否一致来进行检测

设Xf,Yf分别为两个操作数的符号位,Sf为结果的符号位,V为溢出标志位,V=1时即表示溢出,那么就有逻辑表达式:

V = Xf'·Yf'·Sf + Xf·Yf·Sf'(其中 ' 表示取反)

这个逻辑表达式表明,有符号加法运算溢出的条件是:两个操作数都是正数而结果为负数,或者两个操作数都是负数而结果为正数。

根据这个表达式,利用与门、或门、非门可以容易地构造出相应的溢出检测电路。

实现此电路有哪些值得注意的地方?

可以有多种画图方式。

ALU


怎样实现功能选择?

通过数据选择器,选择作为输出的输入。

依次介绍各个功能的实现方式

逻辑左移通过 Logisim 自带的移位器,设置数据位宽为 32 位,移位类型为 逻辑左。

算术右移通过 Logisim 自带的移位器,设置数据位宽为 32 位,移位类型为 运算右。

逻辑右移通过 Logisim 自带的移位器,设置数据位宽为 32 位,移位类型为 逻辑右。

有符号乘通过自带的乘法器实现。

无符号除通过自带的除法器实现。

加法通过上面实现的 32 位快速加法器。

有符号加法溢出用 32 位的溢出直接连接。

有符号减法溢出用 32 位的溢出直接连接。

无符号加法溢出根据加数和小于加法来判断。

无符号减法溢出根据减法差大于被减数判断。

减法转化为加法,对 Y 用一个非门先取反,Cin 输入 1,即再加一,得到补码。

按位与、按位或、按位异或、按位或非用自带的逻辑门实现。

符号比较通过比较器实现,设置数字类型为关于 2 的补码。

无符号比较通过比较器实现,设置数字类型为无符号。

Equal 比较器实现。

重点介绍 OF、UOF、加法、减法

运行运算器测试电路检测

存储器实验

结合自己实现的电路回答后面列出的问题

对于用到的重要器件不能一句带过,需要解释其在电路中的作用以及不同情景下的表现行为

提出自己在实验中碰到的问题以及如何解决的(加分项)

存储扩展


简要介绍
该电路的作用,以及各个引脚的含义

根据输入的区号和位号,得到对应的汉字字形码,即D0 和 D7共 256 位数据。

D0 - D7 每个对应字形码的 32 位。

为什么该实验需要进行存储扩展?

数据线是 32 位,所以两个 4K*16 位的芯片需要进行位扩展。

地址线是 14 位,所以 4K*32 位的芯片要进行字扩展。

实验中存储器数据和 LED 点阵码是如何对应的?

一位对应 LED 点阵上的一个点。

详细介绍
实验中存储器扩展的基本思路以及具体实现

两个 4K*16 位存储芯片先进行位扩展得到 4K*32位的芯片,然后再和其余 3 个 4K*32 位的芯片进行字扩展得到 16K*32 位的芯片。

具体实现:通过分离器将 16 位芯片的输出数据组合在一起,实现位扩展。

通过解码器得到片选信号选择具体芯片对应地址的数据。

对于该实验,在进行字位扩展过程中需要注意什么?(提示:地址空间)

对于电路图中的 3 个 4K*32 位 ROM 和 2 个 4K*16 位 ROM,由于数据已经导入到 ROM 中,顺序固定为从左到右地址递增,因此在进行字扩展时要注意地址区间的划分,也就是左边第一个芯片连到输出 0。

寄存器文件设计


简要介绍

该电路的作用,以及各个引脚的含义

实现 MIPS 的寄存器组,包含 4 个寄存器。

- 分别通过 reg1#、reg2#、write_reg# 获得输入的三个寄存器编号。因为实验只要求实现 0-3 号寄存器,所以只需提取 5 位编号输入的低 2 位(00/01/10/11)

- 通过 write_data 获取要写入寄存器的 32 位数据

- 通过 WE 获取写入标志

- 通过 clk 获取时钟信号

- 将读寄存器 1 的值输出到 reg1

- 将读寄存器 2 的值输出到 reg2

- 将 4 个寄存器当前保存的值输出到 $0 $1 $2 $3,用于观测

详细介绍

如何实现寄存器选择输出?

用数据选择器把四个寄存器的数据输出端口作为输入,reg1接到数据选择器的输出端口,reg1# 控制数据选择器选择作为输出的输入。

如何实现寄存器选择写入?

将寄存器的使能端连到解码器上,write_reg# 信号控制解码器的哪个输出为 1。解码器输出再与 WE 信号相与。当解码器输出为 1 且 WE 信号为 1 时,将 write_data 中的数据写到 寄存器中。 第一个寄存器输入数据端口连接 常量 0,实现 0 号寄存器恒零。

出错可能原因:

  1. 调整了磁盘分区
  2. 升级了 Arch
  3. 升级了 Windows10
  4. 安装了黑苹果

错误表现及修复

grub 引导时会报

error: file '/grub/x86_64-efi/normal.mod' not found.
Entering rescue mode...
...

修复这个错误:
ls

ls (hd0,gpt8)/grub

set root=(hd0,gpt8)
set prefix=(hd0,gpt8)/grub

insmod normal

normal

grub 的修复到此告一段落,下面又遇到了一个新的错误。

启动 Arch Linux 时,报错:

[FAILED] Failed to mount /boot.
[DEPEND] Dependency failed for Local File Systems.
...

这里会提示你输入journalctl -xb查询系统日志。

查看日志,找到报错的地方后,一般很容易解决问题。

面向对象六大原则

  • 单一职责 Single Responsibility Principle

    单一职责的定义就是一个类应该只有一个引起它变化的原因,也就是一个类应该是一组相关性很高的函数、数据的封装。划分界限不是死的,大都依靠经验而定。

  • 开闭原则 Open Close Principle

    定义:软件中的对象(类、模块、函数等)应该对于扩展是开放的,对于修改是封闭的。主要通过继承和接口来实现。开闭原则可以使程序更加灵活和稳定。完全符合开闭原则是理想化的状态。

  • 里式替换 Liskov Substitution Principle

    第一定义:如果对每一个类型为S的对象O1,都有类型为T的对象O2,使得以T定义的所有程序P在所有对象O1都代换成O2时,程序P的行为没有发生变化,那么S就是类型T的子类。乍一看有点不好理解。那看第二定义:所有引用基类的地方必须能透明地使用其子类的对象。也就是说,只要父类能出现的地方,子类就能出现,而且替换为子类也不会产生任何错误或异常。实现里式替换的核心原理是抽象。抽象的实现又依赖于继承。

    继承的优缺点都很明显:

    优点:

    ​ 代码重用,减少创建类的成本(少写些代码?),每个子类都拥有父类的方法和属性;

    ​ 子类与父类基本相似,但又与父类有所区别;

    ​ 提高代码的可扩展性。

    缺点:

    ​ 继承是侵入式的,只要继承就必须拥有父类的所有属性和方法;

    ​ 可能造成子类代码冗余、灵活性降低

  • 依赖倒置 Dependence Inversion Principle

    依赖倒置指代了一种特定的解耦形式,使得高层次的模块不依赖于低层次模块的实现细节(?)。

    关键点:

    ​ 高层模板不应该依赖底层模板,应该依赖其抽象;

    ​ 抽象不应该依赖细节;

    ​ 细节应该依赖抽象。

    依赖倒置在Java中的表现:模块间的依赖通过抽象产生,实现类之间不发生直接的依赖关系。

  • 接口隔离 Interface Segregation Principle

    第一定义:客户端不应该依赖它不需要的接口。第二定义:类间的依赖关系应该建立在最小的接口上。

  • 迪米特原则 Law of Demeter

    也被称为最少知识原则,其定义为:一个对象应该对其他对象有最少的了解。这样可以降低耦合度,当一个类发生改变的时候,对其他类的影响降至最小。

备忘录模式

定义:在不破坏封装的前提下,捕获一个对象的内部状态,并在该对象之外保存这个状态。这样,以后就可以将该对象恢复到原先保存的状态。

(备忘录模式的 UML 类图)

package memento;

public class Game {
private int mCheckpoint = 1;
private int mLifeValue = 100;
private String mWeapon = "Lightsaber";

void play() {
System.out.println("playing " + String.format("level%d", mCheckpoint));
mLifeValue -= 10;
System.out.println("next level");
mCheckpoint++;
System.out.println("reach " + String.format("level%d", mCheckpoint));
}

void quit() {
System.out.println("attribute:"+this.toString());
System.out.println("quit");
}

Memento createMemento() {
Memento memento = new Memento();
memento.mCheckpoint=mCheckpoint;
memento.mLifeValue = mLifeValue;
memento.mWeapon=mWeapon;
return memento;
}

void restore(Memento memento) {
this.mCheckpoint=memento.mCheckpoint;
this.mLifeValue =memento.mLifeValue;
this.mWeapon=memento.mWeapon;
System.out.println("restores attribute:"+this.toString());
}

@Override
public String toString() {
return "Checkpoint " + mCheckpoint + " LifeValue " + mLifeValue + " Weapon " + mWeapon;
}
}
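下面是调用方(Caretaker)的一个使用示意。Memento 类的代码未在上面给出,这里假设它只包含 mCheckpoint、mLifeValue、mWeapon 三个字段:

package memento;

public class Client {
    public static void main(String[] args) {
        Game game = new Game();
        game.play();                            // 打过一关,内部状态发生变化
        Memento memento = game.createMemento(); // 存档:把当前状态保存到备忘录
        game.quit();

        Game newGame = new Game();
        newGame.restore(memento);               // 读档:恢复到存档时的状态
    }
}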

单例模式

单例:保证一个类仅有一个实例,并提供一个访问它的全局访问点。

public class LogUtil {

private static LogUtil sLogUtil;
public final int DEGUB = 0;
public final int INFO = 1;
public final int ERROR = 2;
public final int NOTHING = 3;
public int level = DEGUB;

private LogUtil() {
}

// 双重锁定(DCL)(Double-Check Locking)
public static LogUtil getInstance() {
if (sLogUtil == null) {
synchronized (LogUtil.class) {
if (sLogUtil == null) {
sLogUtil = new LogUtil();
}
}
}
return sLogUtil;
}

public void debug(String msg) {
if (DEGUB >= level) {
System.out.println(msg);
}
}

public void info(String msg) {
if (INFO >= level) {
System.out.println(msg);
}
}

public void error(String msg) {
if (ERROR >= level) {
System.out.println(msg);
}
}

}

只有在 sLogUtil 还没被初始化的时候才会进入 synchronized 块并加上同步锁;等 sLogUtil 一旦初始化完成,就再也不会进入 synchronized 块,这样执行 getInstance() 也不会再受到同步锁的影响,效率上会有一定的提升。

private static volatile Singleton instance;

private Singleton() {}

// 双重锁定(Double-Check Locking)
public static Singleton getInstance() {
if (instance == null) {
synchronized (Singleton.class) {
if (instance == null) {
instance = new Singleton();
}
}
}
return instance;
}

更多创建方法

第 1 种方式:

public 静态成员是 final 域

public class Elvis {
public static final Elvis INSTANCE = new Elvis();
private Elvis() {}
}

风险是可以通过反射调用私有构造方法,要抵御可以在构造方法中判断,创建第二个实例时抛出异常。

第 2 种方式:

public 的是静态工厂方法

public class Elvis {
private static final Elvis INSTANCE = new Elvis();
private Elvis() {}
public static Elvis getInstance() {
return INSTANCE;
}
}

有同样的风险。

同时这两种方法还有反序列化的问题。解决需要将所有实例域声明为 transient,并提供 readResolve 方法:

private Object readResolve() {
return INSTANCE;
}

上面两种又称为饿汉模式。

第 3 种,通过枚举实现单例:

public enum Elvis {
INSTANCE;
public void someMethod() {

}
}

没有反射和序列化风险。

第 4 种,懒汉模式

class Singleton {
private static Singleton INSTANCE;
private Singleton() {}
public static Singleton getInstance() {
if (INSTANCE == null) {
INSTANCE = new Singleton();
}
return INSTANCE;
}
}

线程不安全。

第 5 种,懒汉模式、线程安全

class Singleton {
private static Singleton INSTANCE;
private Singleton() {}
public static synchronized Singleton getInstance() {
if (INSTANCE == null) {
INSTANCE = new Singleton();
}
return INSTANCE;
}
}

效率低。

第 6 种,双重检查 DCL

class Singleton {
private static Singleton INSTANCE;
private Singleton() {}
public static Singleton getInstance() {
if (INSTANCE == null) {
synchronized(Singleton.class) {
if (INSTANCE == null) {
INSTANCE = new Singleton();
}
}
}
return INSTANCE;
}
}

如果单例已经创建,则不再进行同步。

存在 DCL 失效的问题。

INSTANCE = new Singleton();

可以分为三个步骤:

给 Singleton 实例分配内存空间;

调用构造方法,初始化成员变量;

将 INSTANCE 指向分配的内存空间(此时 INSTANCE 就非 null 了)。

因为 JVM 的指令重排序,可以 1 - 3 - 2 这样执行,就可能造成线程读取到的是一个还未初始化的实例,造成 DCL 失效。

解决办法是给 INSTANCE 加上 volatile 关键字。

更多方式

观察者模式

代理模式

为其他对象提供一种代理以控制对这个对象的访问。[DP]

静态代理

提供一个代理对象,代理对象持有对真实对象的一个引用,在代理对象的方法中调用真实对象的方法实现代理。
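一个最小的静态代理示意(接口名和类名均为举例):

interface Api { void request(); }

class RealApi implements Api {
    public void request() { System.out.println("real request"); }
}

// 代理持有真实对象的引用,在调用前后可以插入额外逻辑
class ApiProxy implements Api {
    private final Api real = new RealApi();
    public void request() {
        System.out.println("before");
        real.request();
        System.out.println("after");
    }
}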

动态代理

需要实现 InvocationHandler 接口。

动态代理可以实现 AOP,可以在不改动已有代码结构的情况下增强或控制对象的行为。
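用 JDK 动态代理(java.lang.reflect.Proxy)实现类似增强的示意,沿用上面假设的 Api 接口:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class LogHandler implements InvocationHandler {
    private final Object target;
    LogHandler(Object target) { this.target = target; }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("before " + method.getName()); // 统一的增强逻辑,可用于 AOP
        Object result = method.invoke(target, args);
        System.out.println("after " + method.getName());
        return result;
    }
}

// 使用方式:
// Api api = (Api) Proxy.newProxyInstance(Api.class.getClassLoader(),
//         new Class<?>[]{Api.class}, new LogHandler(new RealApi()));
// api.request();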

享元模式

运用共享技术有效地支持大量细粒度的对象。

Android 中的 Message。
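Message 内部通过对象池复用实例,使用时应该用 obtain() 获取而不是直接 new。下面是一个使用示意(handler、WHAT_UPDATE、progress 均为假设的名字):

// 从池中取出一个 Message,避免频繁创建对象
Message msg = Message.obtain(handler, WHAT_UPDATE);
msg.arg1 = progress;
msg.sendToTarget(); // 消息被处理后,框架会把它回收进池中以便复用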

JVM 组成

  1. 类加载器
  2. 运行时数据区
  3. 执行引擎
  4. 本地库接口

Java 内存区域

Java 虚拟机运行时将内存分为 5 个区域:

  1. 程序计数器

    线程私有。通过这个计数器的值来选取下一个执行的字节码指令。不会 OutOfMemoryError。

  2. Java 虚拟机栈

    线程私有。每个方法在执行的时候都会创建一个栈帧,用于存储局部变量表、操作数栈、动态链接、方法出口信息等。每一个方法从调用到执行完成的过程,就对应一个栈帧从入栈到出栈的过程。如果调用栈的深度超过了所允许的最大深度,会抛出 StackOverflowError;如果虚拟机栈扩展时无法申请到足够的内存,会抛出 OutOfMemoryError。

  3. 本地方法栈

    本地方法栈为 Native 方法服务。StackOverflowError OutOfMemoryError

  4. Java 堆

    所有线程共享,用于存储对象实例,也是垃圾收集的重点对象。

    可以再细分为新生代和老年代,再细可以分成 Eden 空间和 From Survivor 空间和 To Survivor 空间。OutOfMemoryError

  5. 方法区

    所有线程共享。存放加载的类信息、常量、静态变量、JIT 编译器编译后的代码等。OutOfMemoryError

垃圾收集与内存分配

垃圾判定算法

  1. 引用计数算法

    给对象添加一个引用计数器。每当有一个地方引用到了就将计数器值加 1,当引用失效时就将计数器值减 1。如果对象的计数器值为 0,就是不可能再被使用的。但很难解决对象间互相引用的问题。

  2. 可达性分析算法

    从 GC Roots 出发,向下搜索,当一个对象到 GC Roots 没有引用链相连时,则判定为可回收。

可作为 GC Roots 的对象:

  1. Java 虚拟机栈(栈帧中的本地变量表)中引用的对象
  2. 方法区中 类 静态属性引用的对象
  3. 方法区中常量引用的对象
  4. 本地方法区中 JNI(native 方法)引用的对象

引用的类型

Java 1.2 后对引用的概念进行了扩充,分为以下四种(使用示意见列表后的代码)。

  1. 强引用

    new 出来的对象。

  2. 软引用 SoftReference

    有用但非必需的对象。在将要发生内存溢出异常之前,会把这些对象列入回收范围进行第二次回收;若这次回收后还没有足够的内存,才会抛出内存溢出异常。

  3. 弱引用 WeakReference

    只能生存到下次回收之前

  4. 虚引用 PhantomReference

    唯一目的是在这个对象被收集器回收时收到一个系统通知。
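四种引用强度的使用示意(以软引用和弱引用为例,实际输出与具体 JVM 和内存状况有关):

import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();                                      // 强引用
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);  // 软引用:内存不足时才回收
        WeakReference<Object> weak = new WeakReference<>(new Object());    // 弱引用:下次 GC 即可能被回收

        System.gc();
        System.out.println(soft.get() != null); // 通常仍能取到
        System.out.println(weak.get());         // 很可能已经是 null
    }
}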

垃圾收集算法

  1. 标记-清除算法

    分为标记和清理两个阶段。首先标记出所有需要回收的对象,在标记完成后统一清理回收。

    效率问题,标记和清除的效率都不高。空间问题,会产生不连续的内存碎片。

  2. 标记-复制算法

    将内存划分为几部分。同时只使用其中的部分,当一部分的内存用完后,将仍然存活的对象复制到还没使用的部分。

  3. 标记-整理算法

    先标记,然后将存活的对象都向一端移动,然后直接清理掉端边界外的内存。

分代收集:根据对象存活周期的不同,将内存分为几块。新生代使用复制算法,老年代使用标记-清除或标记-整理算法。

垃圾收集器

  1. Serial 收集器

    单线程,垃圾收集时必须暂停其它所有的工作线程。

    简单而高效。

  2. ParNew 收集器

    Serial 收集器的多线程版本。

  3. Parallel Scavenge

    达到一个可控制的吞吐量—— CPU 用于用户代码的时间和 CPU 总消耗时间的比值。

  4. Serial Old

    Serial 收集器的老年代版本。

  5. Parallel Old

    Parallel Scavenge 收集器的老年代版本。

  6. CMS Concurrent Mark Sweep

    以获取最短回收停顿时间为目标的收集器。

    运行过程分为 4 个步骤:

    初始标记

    并发标记

    重新标记

    并发清理

    缺点:

    对 CPU 资源非常敏感。

    无法处理浮动垃圾。

    CMS 基于标记-清除,可能产生内存碎片,为大对象的内存分配带来麻烦。

  7. G1

    运行过程:

    初始标记

    并发标记

    最终标记

    筛选回收

    停顿时间模型的收集器,支持指定在一个长度为 M 毫秒的时间片段内,消耗在垃圾收集上的时间大概率不超过 N 毫秒。

    G1 收集器使用 Mixed GC 模式——垃圾收集的衡量标准不再是 TA 属于哪个分代,而是哪块内存中存放的垃圾数量最多,回收收益最大。实现这个目标的关键是 G1 开创的基于 Region 的堆内存布局。

    G1 为每个 Region 设计了两个名为 TAMS(Top at Mark Start)的指针,把 Region 中的一部分空间划分出来,用于并发回收过程中的新对象分配,并发回收时新分配的对象地址都必须要在这两个指针位置以上。

    G1 从整体看来是基于“标记-整理”算法实现的收集器,但从局部(两个 Region 之间)上看又是基于“标记-复制”算法实现,都不会产生内存碎片。

    缺点:内存占用和额外执行负载都要比 CMS 高。

Class 类文件

类加载

类加载的时机

有且只有 6 种情况:

  1. 遇到 new、getstatic、putstatic、invokestatic 这 4 条字节码指令时,如果类没有进行过初始化,则需要先触发其初始化。对应的场景:使用 new 关键字实例化对象、读取一个类的静态字段、设置一个类的静态字段、调用一个类的静态方法。
  2. 使用 java.lang.reflect 包中的方法对类进行反射调用时,如果类没有进行过初始化,则需要先触发其初始化。
  3. 当初始化一个类时,如果父类没有进行过初始化,则触发父类的初始化。
  4. 虚拟机启动时,初始化主类。
  5. 使用 JDK 7 新加入的动态语言支持时,如果一个 java.lang.invoke.MethodHandle 实例最后的解析结果为 REF_getStatic REF_putStatic REF_invokeStatic REF_newInvokeSpecial 四种类型的方法句柄,并且这个方法句柄对应的类没有初始化,则需要先触发其初始化。
  6. 当一个接口中定义了 JDK 8 中新加入的 default 方法时,如果有这个接口的实现类发生了初始化,则接口要在其之前被初始化。

以上 6 种是对一个类型进行主动引用的情况。

类加载的过程

  1. 加载
  2. 验证
  3. 准备
  4. 解析
  5. 初始化

类加载器

双亲委派模型

类加载器可以分为 3 种:

  • 启动类加载器 (Bootstrap ClassLoader)

    负责将存放在 <JAVA_HOME>\lib 目录中的,或被 -Xbootclasspath 参数所指定路径中的,并且是虚拟机能够识别的(如 rt.jar)类库加载到虚拟机内存中。

  • 扩展类加载器 (Extension ClassLoader)

    负责加载 <JAVA_HOME>\lib\ext 目录中的,或者被 java.ext.dirs 系统变量所指定的路径中的所有类库。

  • 应用程序类加载器 (Application ClassLoader)

    加载用户类路径(ClassPath)上指定的类库。

  • 自定义类加载器

双亲委派模型:类加载器之间具有层次关系,称为类加载器的双亲委派模型。

双亲委派模型要求除了顶层的启动类加载器之外,其余的类加载器都应有自己的父类加载器。类加载器之间的父子关系一般通过组合关系来实现。双亲委派模型的工作流程是:当一个类加载器收到类加载请求时,会先把请求委派给父类加载器完成,层层上传;只有当父加载器反馈无法完成这个加载请求(即在它的搜索范围中没有找到所需的类)时,子加载器才会尝试自己加载。

双亲委派模型的优点:Java 类随着 TA 的类加载器一起具备了一种带有优先级的层次关系。例如 java.lang.Object 类在 rt.jar 中,是委派给顶层的启动类加载器完成的,因此 Object 类在各种类加载过程中都是同一个类。

实现原理:在 java.lang.ClassLoader#loadClass() 方法中。先检查类是否已经被加载过,如果加载过就直接返回已加载的类。否则调用父加载器的 loadClass() 方法,如果父加载器为 null,则使用启动加载器加载,如果还返回 null,才会调用本身的类加载器完成加载请求。
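据此,自定义类加载器通常只覆写 findClass(),把双亲委派的逻辑留给 ClassLoader.loadClass()。下面是一个简化示意(MyClassLoader、loadClassBytes 均为假设的名字,省略了错误处理):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class MyClassLoader extends ClassLoader {
    public MyClassLoader(ClassLoader parent) {
        // 父子关系通过组合(持有 parent 引用)实现
        super(parent);
    }

    // loadClass() 已实现"先查已加载 -> 委派 parent -> 父加载器失败才调用 findClass"的流程,
    // 这里只负责"自己加载"这一步
    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        byte[] bytes = loadClassBytes(name);
        if (bytes == null) {
            throw new ClassNotFoundException(name);
        }
        return defineClass(name, bytes, 0, bytes.length);
    }

    // 假设的辅助方法:按类名读取 .class 字节,这里借助 getResourceAsStream 仅作示意
    private byte[] loadClassBytes(String name) {
        String path = name.replace('.', '/') + ".class";
        try (InputStream in = getResourceAsStream(path)) {
            if (in == null) return null;
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            return out.toByteArray();
        } catch (Exception e) {
            return null;
        }
    }
}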

Java 语法糖

Java 内存模型与线程

Java 内存模型的主要目标是定义程序中各个变量的访问规则,即如何将变量存储到内存,如何将变量从内存中取出。

Java 内存模型规定了所有变量都存储在主内存(可类比机器物理硬件的主内存)。每条线程还有自己的工作内存(可类比处理器高速缓存),工作内存保存了该线程用到的主内存变量拷贝副本。线程对变量的读取、赋值等发生在工作内存。要先在工作内存中赋值,再写回主内存。不同线程之间的变量值的访问传递也要通过主内存为中介完成。

实现细节,定义了 8 种原子操作:

lock

unlock

read

load

use

assign

store

write

把一个变量从主内存复制到工作内存,需顺序执行 read load。

八种原子操作必须满足规则:

不允许 read 和 load、store 和 write 操作之一单独出现,即不允许一个变量从主内存读取了但工作内存不接受,或者从工作内存发起了回写但主内存不接受。

不允许一个线程丢弃 TA 最近的 assign 操作,即变量在工作内存中改变了必须要同步回主内存。

不允许一个线程在没有发生 assign 操作时,将数据从工作内存同步回主内存。

一个新的变量只能在主内存中“诞生”,不允许在工作内存中直接使用一个未被初始化(load 或 assign)的变量,即对一个变量实施 use、store 操作之前,必须先执行过 assign 和 load 操作。

一个变量必须在同一个时刻只能有一个线程对其进行 lock 操作,但 lock 操作可以被一个线程执行多次,多次执行 lock 后,必须执行相同次数的 unlock 才能解锁。

如果对一个变量执行 lock 操作,会清空工作内存中此变量的值,在执行引擎使用这个值前,必须重新执行 load 或 assign 操作以初始化变量的值。

如果一个变量没有执行过 lock 操作,就不能对其 unlock。也不能 unlock 一个被其它线程锁定的变量。

对一个变量 unlock 前,必须把此变量同步回主内存(执行 store、write)操作。

volatile

第一:保证了变量对所有线程的可见性。即当一个线程修改了这个变量的值,其它线程是立即可以得知的。

第二:禁止指令重排序。

有 volatile 修饰的变量,赋值后多了一个 lock addl $0x0, (%esp) 操作,操作相当于一个内存屏障,重排序时不能把内存屏障后的指令排序到内存屏障之前。
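一个利用 volatile 保证可见性的常见示意如下(StopDemo 为假设的示例类名):

public class StopDemo {
    // volatile 保证主线程对 running 的修改对 worker 线程立即可见
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // 忙等;若 running 不加 volatile,worker 可能一直读到工作内存中的旧值而无法退出
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(1000);
        running = false; // 主线程修改后,worker 能够立即感知
    }
}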

先行发生原则 happens-before

与上面规则等效的判定原则:

程序次序规则

管程锁定原则

volatile 变量规则:对一个 volatile 变量的写操作先行发生于后面对这个变量的读操作。

线程启动规则

线程终止原则

线程中断原则

对象终结原则

传递性

Java 线程

线程安全与锁优化

线程安全的实现方法

互斥同步(阻塞同步)

使用 synchronized 关键字

使用 ReentrantLock

等待可中断

公平锁

锁可以绑定多个条件
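下面用一个示意展示上面提到的三个特性:lockInterruptibly() 对应等待可中断,构造参数 true 对应公平锁,多个 Condition 对应一个锁绑定多个条件(LockDemo 为假设的示例类名):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    // true 表示公平锁:等待最久的线程优先获得锁
    private final ReentrantLock lock = new ReentrantLock(true);
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();
    private int count;

    public void produce() throws InterruptedException {
        lock.lockInterruptibly(); // 等待锁的过程中可以响应中断
        try {
            while (count == 10) {
                notFull.await();
            }
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public void consume() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0) {
                notEmpty.await();
            }
            count--;
            notFull.signal();
        } finally {
            lock.unlock();
        }
    }
}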

非阻塞同步

基于冲突检测的乐观并发策略,需要硬件指令集的支持,以保证操作和冲突检测这两个步骤具备原子性。
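java.util.concurrent.atomic 包中的原子类是这种策略的典型应用。下面是一个基于 CAS 的计数器示意(CasCounter 为假设的示例类名):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // 自增:CAS 失败(检测到冲突)就重试,始终不阻塞线程
    public int increment() {
        int prev;
        do {
            prev = value.get();
        } while (!value.compareAndSet(prev, prev + 1));
        return prev + 1;
    }
}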

不可变

锁优化

自旋锁 自适应自旋锁

互斥同步对性能的最大影响是阻塞的实现,挂起和恢复线程都要陷入内核态。如果物理机器有多个 CPU 或者多个 CPU 核心,可以允许两个或多个线程同时并发运行,那么可以让后面请求锁的线程自旋等待一会儿,但不放弃处理器的执行时间,看持有锁的线程是否会很快释放锁。

自适应:根据具体情况,由上一次在同一个锁上的自旋时间和持有锁的线程的状态决定自旋等待的时间。

锁消除

锁粗化

轻量级锁

偏向锁

参考

Garbage Collection Roots

深入理解 Java 虚拟机:JVM 高级特性与最佳实践 / 周志明著 .——2 版.——北京:机械工业出版社,2013.6

经典面试题|讲一讲JVM的组成

Pacman

使用

安装指定包

sudo pacman -S package_name

查看哪些包属于指定包组

pacman -Sq plasma

安装一个本地包

sudo pacman -U /path/to/package/package_name-version.pkg.tar.xz

移除指定包

移除指定包,保留其依赖。

sudo pacman -R package_name

移除指定包及其不被其他包依赖的依赖。

sudo pacman -Rs package_name

升级包

pacman 不支持部分升级,升级时应使用 sudo pacman -Syu 对整个系统进行升级。

查询包

查询在数据库中的包,会搜索包名和描述。

sudo pacman -Ss string

查询已安装的包。

sudo pacman -Qs string

显示软件包的依赖树

需要先安装pacman-contrib

pactree git

定期清理软件包缓存

需要先安装pacman-contrib

paccache -r

设置保留几个最近的版本:

paccache -rk1

配置

配置文件位于/etc/pacman.conf

镜像

/etc/pacman.d/mirrorlist

报错

“Failed to commit transaction (conflicting files)” 错误

error: could not prepare transaction
error: failed to commit transaction (conflicting files)
package: /usr/lib/node_modules/node-gyp/.github/workflows/tests.yml exists in filesystem
Errors occurred, no packages were upgraded.

https://wiki.archlinux.org/index.php/Pacman_(%E7%AE%80%E4%BD%93%E4%B8%AD%E6%96%87)#%22Failed_to_commit_transaction_(conflicting_files)%22_%E9%94%99%E8%AF%AF

官方有解决方案,不过因为我这里是 node_modules 的问题,所以我选择:
sudo npm uninstall node-gyp -g

再 sudo pacman -Syu
但遇到了 warning:
warning: could not get file information for usr/lib/node_modules/node-gyp/node_modules/safer-buffer/Porting-Buffer.md
warning: could not get file information for usr/lib/node_modules/node-gyp/node_modules/safer-buffer/Readme.md
warning: could not get file information for usr/lib/node_modules/node-gyp/node_modules/safer-buffer/dangerous.js
warning: could not get file information for usr/lib/node_modules/node-gyp/node_modules/safer-buffer/package.json


warning: could not get file information for usr/lib/node_modules/node-gyp/test/test-install.js
warning: could not get file information for usr/lib/node_modules/node-gyp/test/test-options.js
warning: could not get file information for usr/lib/node_modules/node-gyp/test/test-process-release.js
warning: could not get file information for usr/lib/node_modules/node-gyp/update-gyp.py

但也成功安装了,谨此记录。

参考

pacman - ArchWiki

安装 wine

Wine - ArchWiki

sudo vim /etc/pacman.conf

取消[multilib]及下面一行的注释。

sudo pacman -Sy wine

安装 QQ音乐

yay -S deepin.com.qq.qqmusic

Troubleshooting

Deepin-wine applications fails to start

Deepin-wine - ArchWiki

sudo pacman -S xsettingsd

将 /usr/bin/xsettingsd 设为自启动。

on KDE:

Plasma can autostart applications and run scripts on startup and shutdown. To autostart an application, navigate to System Settings > Startup and Shutdown > Autostart and add the program or shell script of your choice.

类似安装微信等。

完。

2020-09-12 23:45:26

Chap 2 计算机中数据的表示方法

真值与机器码

将用“+”、“-”表示正负的二进制数称为符号数的真值。
把将符号与数值一起编码表示的二进制数称为机器码。
机器码种类:

  • 原码
  • 反码
  • 补码
    补码:
    符号位:0-正,1-负。
    数值位:正数补码与真值相同,负数补码数值位是真值按位取反,并在最后一位加1。
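举个例子:真值 -5 用 8 位补码表示,数值位 0000101 按位取反得 1111010,再加 1 得 1111011,加上符号位 1 即 11111011。下面用一小段 Java 验证(仅为示意):

public class TwosComplementDemo {
    public static void main(String[] args) {
        byte x = -5;
        // 取低 8 位,即 -5 的 8 位补码
        System.out.println(Integer.toBinaryString(x & 0xFF)); // 输出 11111011
    }
}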

奇偶校验码
海明码
冗余校验码

Chap 3 计算方法与加法器

定点数的除法运算

手工除法运算

除法通过减法实现:比较余数与除数右移i位的大小来决定商1还是0。

Unix网络编程

HTTP权威指南

第一行代码 第2版

第一行代码 第3版

Computer Organization And Design e

The Linux Programming interface e

TCP/IP 详解 e

TCP

TCP 负责足够快地发送数据报,又不能引起网络拥塞。TCP 超时后要重传没有递交的数据报。还要把错序的数据报重新装配成正确的顺序。

TCP 服务由发送端和接收端创建一种称为套接字(socket)的端点来获得。每个 socket 都有一个编号,由主机的 IP 地址加主机的 16 位数值组成。16 位数值称为端口(port)。为获得 TCP服务,必须显式的在两台机器的套接字之间建立连接。

TCP 连接是全双工的,不支持组播和广播。

TCP 连接上的每个字节都有 TA 自己独有的 32 位序号。

TCP 实体使用的基本协议是具有动态窗口大小的滑动窗口协议。确认号的值等于接收端期望接收的下一个序号。

TCP 段头结构

(图:TCP 段头结构)

ACK 位表明 Acknowledgment number 是有效的。

SYN (Synchronize sequence numbers)

FIN

每个选项(Options)具有类型-长度-值(Type-Length-Value)编码。

TCP 拥塞控制

当网络的负载超过处理能力,就会产生拥塞。当路由器上的队列增长到很大时,网络层检测到拥塞,并试图通过丢弃数据包来管理拥塞。传输层接收到网络层传来的拥塞信息,也会减慢 TA 发送到网络的流量速率。

TCP 维护一个拥塞窗口(congestion window),窗口大小是任何时候发送端可以向网络发送的字节数。

流量控制窗口指出了接收端可以缓冲的字节数。

慢开始

拥塞避免

快重传

快恢复

TCP 建立连接

(图:TCP 三次握手建立连接)

第 1 步,客户端发送一个 TCP 段,SYN 位设为 1,并随机挑选一个初始的 sequence number(client_isn)。

第 2 步,当服务端收到包含 TCP SYN segment 的 IP datagram 时,服务端会提取出 TCP SYN segment。然后服务端向客户端发送 TCP segment,SYN 置 1,ACK 置 1,acknowledgment number 设为 client_isn + 1,再挑选一个 server_isn 放在 sequence number。

第 3 步,当客户端收到 SYNACK segment 后,客户端再向服务端发送一个 TCP segment,ACK 置 1,acknowledgment number 设为 server_isn + 1,SYN 位置 0。带上发送到服务端的数据。

TCP 关闭连接

(图:TCP 四次挥手关闭连接)

客户端和服务端都可以断开连接。

第 1 步,客户端发送 TCP segment,FIN 置 1,

第 2 步,服务端收到后,ACK 置 1,

第 3 步,服务端再发送 segment,FIN 置 1,

第 4 步,客户端发送 segment,ACK 置 1,


UDP

QUIC

QUIC在许多方面可以被视为一种新型的可靠且安全的传输层协议,它适合为形似HTTP的协议提供服务,并且可以解决一些在基于TCP和TLS传输的HTTP/2协议中存在的缺点。

问题

网络条件比较好的情况下,TCP 的什么机制会影响传输速度?

TCP 报文段首部至少有 20 个字节,开销较大,利用率较低。

慢启动

如何实现 UDP 的可靠传输?

UDP 是传输层协议,我们要实现 UDP 的可靠传输,就要在应用层模仿 TCP,为 UDP 加上那些机制。

添加 seq/ack,添加序列号和确认号,确保数据发送到对端。

添加发送和接收缓冲区,用于支持超时重传。

添加超时重传机制。

添加滑动窗口和拥塞窗口。

参考

JDK 1.8

源码注释

Hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

基于 Hash table 的 Map 接口实现。HashMap 大致与 Hashtable 相同,区别是 HashMap 是非同步(线程不安全)的,并且允许 null 键和 null 值。这个类不对 map 的顺序做任何保证,特别是不保证顺序随时间推移保持不变。

This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets. Iteration over collection views requires time proportional to the “capacity” of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Thus, it’s very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important.

这个实现提供基本操作( get 和 put )的常数时间,假设 hash 函数将元素合适地分配到 bucket 中。遍历集合需要和 HashMap 实例的容量( capacity )( bucket 的数量)+ TA 的大小( size )( key-value 映射的数量)成比例的时间。因此,如果遍历的性能很重要,不要设置太大的 capacity (或太小的 load factor)。

An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.

一个 HashMap 实例有两个参数影响 TA 的性能:初始容量(initial capacity)和负载因子(load factor)。capacity 是 hash table 中 bucket 的数量,initial capacity 是 hash table 创建时的 capacity。load factor 是在 capacity 自动增长前,允许 hash table 装多满的一个量度。当 hash table 中 entry 的数量超过 load factor 与当前 capacity 的乘积时,hash table 会被 rehash(即重建内部数据结构),使 bucket 的数量大约变为原来的两倍。

As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.

作为一个普遍的规则,默认的 load factor(0.75)提供了一个在时间和空间消耗上较好的权衡。更高的值减少了空间开销,但增加了查找消耗(反映在 HashMap 类的大多数操作上,包括 get 和 put)。在设置 initial capacity 时,应该把 map 中预期的 entry 数量和 load factor 纳入考虑,以尽可能减少 rehash 操作的次数。如果 initial capacity 大于最大 entry 数量除以 load factor 的值,就不会发生 rehash 操作。
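按照这条规则估算初始容量的一个用法示意如下(expectedEntries 等名字为示例假设):

import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    public static void main(String[] args) {
        int expectedEntries = 1000;      // 预计要存放的键值对数量
        float loadFactor = 0.75f;
        // initial capacity 大于 最大 entry 数 / load factor,则不会发生 rehash
        int initialCapacity = (int) (expectedEntries / loadFactor) + 1;
        Map<String, String> map = new HashMap<>(initialCapacity, loadFactor);
        // 之后放入不超过 1000 个映射时,不会触发 resize/rehash
    }
}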

If many mappings are to be stored in a HashMap instance, creating it with a sufficiently large capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table. Note that using many keys with the same hashCode() is a sure way to slow down performance of any hash table. To ameliorate impact, when keys are Comparable, this class may use comparison order among keys to help break ties.

如果很多映射要被存到一个 HashMap 实例中,与让 HashMap 按需自动 rehash 来增大 table 相比,用足够大的 capacity 创建会让映射存储得更高效。注意:使用很多 hashCode() 相同的 key 一定会降低任何 hash table 的性能。为减少这种影响,当 key 实现了 Comparable 时,HashMap 可能会利用 key 之间的比较顺序来帮助解决冲突。

Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be “wrapped” using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:

Map m = Collections.synchronizedMap(new HashMap(...));

The iterators returned by all of this class’s “collection view methods” are fail-fast: if the map is structurally modified at any time after the iterator is created, in any way except through the iterator’s own remove method, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.

This class is a member of the Java Collections Framework.

field

serialVersionUID

序列化用。

DEFAULT_INITIAL_CAPACITY

默认初始容量 16。必须是 2 的幂。

MAXIMUM_CAPACITY

最大容量 1 << 30。

DEFAULT_LOAD_FACTOR

默认 load factor 0.75。

TREEIFY_THRESHOLD

红黑树化的阈值 8。必须比 2 大,应该至少是 8,以便与 UNTREEIFY_THRESHOLD 配合。

UNTREEIFY_THRESHOLD

红黑树退化为链表的阈值 6。应该比 TREEIFY_THRESHOLD 小,且最大为 6,以便与删除时的收缩检测配合。

MIN_TREEIFY_CAPACITY

最小的树化的 capacity 64。至少 4 * TREEIFY_THRESHOLD 以避免 resize 时与 treeification threshold 的冲突。

table

桶。存放链表的头节点或树的根。

size

map 中 key-value 映射的数量。

modCount

被结构性修改的次数。

threshold

下一次扩容的阈值。 threshold = capacity * load factor。

loadFactor

负载因子。

构造函数

// 指定 initialCapacity 用于计算 threshold,指定 load factor
public HashMap(int initialCapacity, float loadFactor)
// 指定 initialCapacity,使用默认 load factor,调用上面一个方法
public HashMap(int initialCapacity)
// 使用默认 load factor
public HashMap()
// 根据一个已有的 map 创建,使用默认 load factor
public HashMap(Map<? extends K, ? extends V> m)

put

/**
* Implements Map.put and related methods.
*
* @param hash hash for key
* @param key the key
* @param value the value to put
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
// table 还未创建时
if ((tab = table) == null || (n = tab.length) == 0)
// 通过 resize() 创建 table,并将长度赋值给 n
n = (tab = resize()).length;
// 根据 i = (n - 1) & hash 得到位置,并赋值给 p
// 如果位置上是 null
if ((p = tab[i = (n - 1) & hash]) == null)
// 就直接创建一个新的结点,并赋值给该位置
tab[i] = newNode(hash, key, value, null);
// 如果数组位置上已经有元素
else {
Node<K,V> e; K k;
// 如果 key 是和已有元素的 key 相同,先把 Node 记录在 e 中
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
e = p;
// 如果 p 是 TreeNode
else if (p instanceof TreeNode)
// 新建 TreeNode 并记录在 e 中
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
// 如果 key 不同
else {
// 在链表上遍历
for (int binCount = 0; ; ++binCount) {
// 找到链表的结束
if ((e = p.next) == null) {
// 新建链表 Node 并连上去
p.next = newNode(hash, key, value, null);
// 如果在遍历的过程中,链表的长度达到了需要转换成红黑树的阈值时
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
// 转换成红黑树
treeifyBin(tab, hash);
break;
}
// 在遍历的过程中,找到了相同 key 的元素
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
if (e != null) { // existing mapping for key
V oldValue = e.value;
// 根据 onlyIfAbsent 是否覆盖
if (!onlyIfAbsent || oldValue == null)
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
// 增加 modCount
++modCount;
// 根据 size 是否扩容
if (++size > threshold)
resize();
afterNodeInsertion(evict);
return null;
}

get

以 equals 为标准。

/**
* Implements Map.get and related methods.
*
* @param hash hash for key
* @param key the key
* @return the node, or null if none
*/
final Node<K,V> getNode(int hash, Object key) {
Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
if ((tab = table) != null && (n = tab.length) > 0 &&
(first = tab[(n - 1) & hash]) != null) {
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
return first;
if ((e = first.next) != null) {
if (first instanceof TreeNode)
return ((TreeNode<K,V>)first).getTreeNode(hash, key);
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}

基本思想是先根据 hash 找到对应的桶,接着检查第一个节点(first)的 key,如果 equals 就返回。否则,判断节点类型,如果是 TreeNode,就调用 first.getTreeNode(hash, key) 来查找;如果是链表,就向下遍历找到 equals 的。否则没找到返回 null。
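getNode 用 (n - 1) & hash 定位桶;当 n 是 2 的幂时,这等价于对 n 取模。下面是一个验证这种等价关系的小例子(示意,非源码):

public class IndexDemo {
    public static void main(String[] args) {
        int n = 16;                          // 桶数组长度,必须是 2 的幂
        int hash = "key".hashCode();
        int byMask = (n - 1) & hash;         // HashMap 的定位方式
        int byMod = Math.floorMod(hash, n);  // 数学意义上的取模(对负 hash 同样成立)
        System.out.println(byMask + " == " + byMod);
    }
}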

resize

因为我们使用 2 的幂来扩容,每个 bin 里的元素或者待在原来的索引位置,或者移动到新 table 中偏移量为原容量(2 的幂)的位置。

新 capacity 是旧 capacity 的两倍。

/**
* Initializes or doubles table size. If null, allocates in
* accord with initial capacity target held in field threshold.
* Otherwise, because we are using power-of-two expansion, the
* elements from each bin must either stay at same index, or move
* with a power of two offset in the new table.
*
* @return the table
*/
final Node<K,V>[] resize() {
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
// table 已经初始化
if (oldCap > 0) {
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return oldTab;
}
// 如果没超过最大容量
// newCap 变成两倍
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY)
// 下一次 resize 的阈值也设为两倍
newThr = oldThr << 1; // double threshold
}
// threshold > 0,且桶数组未被初始化
// 调用 HashMap(int) 和 HashMap(int, float) 构造方法时
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
// 桶数组未被初始化,且 threshold 为 0
// 调用 HashMap() 构造方法
else { // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
// 第一个条件分支未计算 newThr 或嵌套分支在计算过程中导致 newThr 溢出归零
if (newThr == 0) {
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings({"rawtypes","unchecked"})
// 创建新的 table
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
if (oldTab != null) {
// 拷贝旧的 table
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
// 如果链表只有一个节点,直接计算 hash 作为头节点放入对应的桶中
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
// 根据 e.hash & oldCap 是否等于 0 将原链表分为两部分,即分成两个链表
// 在 j 桶的位置的部分
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
// 在 j + oldCap 的部分
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
// 将链表头节点放入对应的桶
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}

split

/**
* Splits nodes in a tree bin into lower and upper tree bins,
* or untreeifies if now too small. Called only from resize;
* see above discussion about split bits and indices.
*
* @param map the map
* @param tab the table for recording bin heads
* @param index the index of the table being split
* @param bit the bit of hash to split on
*/
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
TreeNode<K,V> b = this;
// Relink into lo and hi lists, preserving order
// 将节点分为两个 list
TreeNode<K,V> loHead = null, loTail = null;
TreeNode<K,V> hiHead = null, hiTail = null;
int lc = 0, hc = 0;
for (TreeNode<K,V> e = b, next; e != null; e = next) {
next = (TreeNode<K,V>)e.next;
e.next = null;
// (e.hash & bit) == 0 划分根据
// bit 是 oldCap
if ((e.hash & bit) == 0) {
if ((e.prev = loTail) == null)
loHead = e;
else
loTail.next = e;
loTail = e;
// 记录节点数量
++lc;
}
else {
if ((e.prev = hiTail) == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
++hc;
}
}

if (loHead != null) {
// 根据节点数量是否 treeify
if (lc <= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
else {
tab[index] = loHead;
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
// else hiHead == null 时,表明扩容后,
// 所有节点仍在原位置,树结构不变,无需重新树化
}
}
if (hiHead != null) {
if (hc <= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
else {
tab[index + bit] = hiHead;
if (loHead != null)
hiHead.treeify(tab);
}
}
}

treeifyBin

/**
* Replaces all linked nodes in bin at index for given hash unless
* table is too small, in which case resizes instead.
*/
final void treeifyBin(Node<K,V>[] tab, int hash) {
int n, index; Node<K,V> e;
// 当 capacity < MIN_TREEIFY_CAPACITY 时,进行 resize
if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
resize();
else if ((e = tab[index = (n - 1) & hash]) != null) {
TreeNode<K,V> hd = null, tl = null;
do {
// 将普通节点替换成树形节点
TreeNode<K,V> p = replacementTreeNode(e, null);
// 将普通链表转成由树形节点链表
if (tl == null)
hd = p;
else {
p.prev = tl;
tl.next = p;
}
tl = p;
} while ((e = e.next) != null);
if ((tab[index] = hd) != null)
// 将树形链表转换成红黑树
hd.treeify(tab);
}
}

问题

被 transient 所修饰 table 变量

如果大家细心阅读 HashMap 的源码,会发现桶数组 table 被声明为 transient。transient 意为短暂的、非持久化的,在 Java 中,被该关键字修饰的变量不会被默认的序列化机制序列化。我们再回到源码中,考虑一个问题:桶数组 table 是 HashMap 底层重要的数据结构,不序列化的话,别人还怎么还原呢?

这里简单说明一下吧,HashMap 并没有使用默认的序列化机制,而是通过实现 readObject/writeObject 两个方法自定义了序列化的内容。这样做是有原因的,试问一句,HashMap 中存储的内容是什么?不用说,大家也知道是键值对。所以只要我们把键值对序列化了,我们就可以根据键值对数据重建 HashMap。有的朋友可能会想,序列化 table 不是可以一步到位,后面直接还原不就行了吗?这样一想,倒也是合理。但序列化 table 存在着两个问题:

  1. table 多数情况下是无法被存满的,序列化未使用的部分,浪费空间
  2. 同一个键值对在不同 JVM 下,所处的桶位置可能是不同的,在不同的 JVM 下反序列化 table 可能会发生错误。

以上两个问题中,第一个问题比较好理解,第二个问题解释一下。HashMap 的get/put/remove等方法第一步就是根据 hash 找到键所在的桶位置,但如果键没有覆写 hashCode 方法,计算 hash 时最终调用 Object 中的 hashCode 方法。但 Object 中的 hashCode 方法是 native 型的,不同的 JVM 下,可能会有不同的实现,产生的 hash 可能也是不一样的。也就是说同一个键在不同平台下可能会产生不同的 hash,此时再对在同一个 table 继续操作,就会出现问题。

综上所述,大家应该能明白 HashMap 不序列化 table 的原因了。

HashMap 链表插入方式 → 头插为何改成尾插 ?

HashMap 是有序的吗?

HashMap 是无序的,LinkedHashMap 是有序的。
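一个对比两者迭代顺序的小例子(OrderDemo 为假设的示例类名):

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<>();
        Map<String, Integer> linkedMap = new LinkedHashMap<>();
        for (String k : new String[]{"banana", "apple", "cherry"}) {
            hashMap.put(k, k.length());
            linkedMap.put(k, k.length());
        }
        System.out.println(hashMap);   // 迭代顺序不保证与插入顺序一致
        System.out.println(linkedMap); // 保持插入顺序:banana、apple、cherry
    }
}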

参考

源码

HashMap 源码详细分析(JDK1.8) | 田小波的技术博客

HashMap源码分析 · Leo’s Studio

HashMap 链表插入方式 → 头插为何改成尾插 ? - 青石路 - 博客园

安装

Arch Linux 上:

sudo pacman -S mariadb mariadb-libs

安装 mariadb 软件包之后,你必须在启动 mariadb.service 之前运行下面这条命令:

sudo mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql

注意: 出于安全考虑,systemd 的 .service 文件设置了 ProtectHome=true 来禁止 MariaDB 访问 /home、/root 和 /run/user 目录内的文件。datadir 必须要放在以上文件夹之外,并且由 mysql 用户和用户组 所有。 如果要改变这个设置,你可以根据以下链接创建一个替代的 service 文件:[2]
然后 enable 或者 start mariadb.service。

提示: 如果数据目录使用的不是 /var/lib/mysql,需要在 /etc/my.cnf.d/server.cnf 文件的 [mysqld] 部分设置 datadir=<数据目录>
用下面这个命令启动数据库级别的安全配置助手,来配置一些必要的安全选项:

sudo mysql_secure_installation

连接

mysql -u root -p

显示所有的数据库:

show databases;

use uno;

show tables;

select * from names;

SELECT
    select_list
FROM
    table_name
ORDER BY
    column1 [ASC|DESC],
    column2 [ASC|DESC],
    ...;

MySQL

用户

添加新用户

先用 root 登录数据库
sudo mysql -p -u root

CREATE USER 'dbcourse'@'localhost' IDENTIFIED BY 'some_pass';

授予权限

授予 dbcourse 用户全部操作权限
GRANT ALL PRIVILEGES ON test TO 'dbcourse'@'localhost';

刷新权限

FLUSH PRIVILEGES;

WHERE

报错

报错:

MariaDB [test]> INSERT INTO S VALUES('S1', '精益', '20', '天津');
ERROR 1366 (22007): Incorrect string value: '\xE7\xB2\xBE\xE7\x9B\x8A' for column `test`.`S`.`SNAME` at row 1

解决:
https://stackoverflow.com/questions/1168036/how-to-fix-incorrect-string-value-errors
连接数据库后:

SET NAMES 'utf8';
SET CHARACTER SET utf8;

另:检查数据库设置:

mysql> show variables like '%colla%';
mysql> show variables like '%charac%';

dp

Density-independent Pixels 密度无关像素

一个基于屏幕物理密度的抽象的单位。

TA 与 屏幕物理密度/像素密度 相关,对应不同的 density


要绘制东西,需要 4 个基本组件:存储像素点的 Bitmap、主持绘制的 Canvas、绘图的基本元素(Path,Rect,text,Bitmap等)、描述颜色和样式的 Paint。


Git

简介

Git 是一个免费的,开源的分布式版本控制系统,被设计用来又快又好地处理大大小小的项目。


自定义 View 有两种方法:

继承 View 的子类,如TextView

直接继承 View 类。需要自己绘制 UI 元素。


CSS

CSS 用于指示 HTML 在浏览器中的显示样式。

CSS 组成

CSS 由选择符(选择器)与声明组成,声明由属性和值组成。

p {
  color: blue;
}

HTML

sample

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>制作我的第一个网页</title>
  </head>
  <body>
    <h1>Hello HTML</h1>
  </body>
</html>

简介

HTTP (Hypertext Transfer Protocol 超文本传输协议)是一种用于分布式、协作式和超媒体信息系统的应用层协议。HTTP 协议定义了客户端如何从服务器请求 Web 页面,以及服务器如何把 Web 页面返回给客户端。HTTP 是万维网数据通信的基础。


Service是一种可以在后台执行长时间运行操作而不提供界面的应用组件。


Android 构建时会编译源代码和应用资源,然后打包为APK文件。Android Studio 使用 Gradle 来自动执行和管理构建流程。Android Plugin for Gradle 与 Gradle 搭配使用,专门为 Android 应用的构建提供支持。


启用 Data Binding

Data Binding 库与 Android Gradle 插件捆绑在一起,所以不需要声明依赖,但需要启用。


概述

数字系统

可以用不同数字反映的物理量称为数字量。

表示数字量的信号称为数字信号。

处理数字信号的电子电路称为数字电路(数字逻辑电路、逻辑电路),其功能通过逻辑运算和逻辑判断完成。


前言

Activity的生命周期是Android中的基础。在用户使用App的过程中,Activity在各个状态中不停转换。


写在前面

make这一阶段花费时间和机器配置有关,可能长达几小时,建议选择好时间。同时需要足够的磁盘空间,22G以上。

环境:Ubuntu 18.04.1 阿里云服务器


Java Object 类的源码中 hashCode() 的注释写道:

  • 在Java程序运行中,不论调用多少次hashCode(),总是返回相同的整数,前提是没有修改equals()方法中用到的用来进行比较的值。
  • 在这次程序运行和下次程序运行中hashCode()返回的值不要求相同。
  • 如果两个对象根据equals方法是相等的,那么调用hashCode()方法也要返回相同的整数值。
  • 当两个对象根据equals方法是不相等的,不要求hashCode()方法返回不同的整数值。但返回不同的整数值可以提高hash table的效率。
    • 出于实用角度,Object类中的hashCode方法为不同对象返回不同整数值。(可能是根据对象的内存地址进行函数变换得到。)

equals方法的注释同样有很多信息:

equals()是用来实现一个非null引用相等关系的。

equals()的性质:

  • 自反性 reflexive

    对于非null引用值x,x.equals(x)总是返回true

  • 对称性 symmetric

    对于非null引用值x、y,x.equals(y)返回true当且仅当y.equals(x)返回true

  • 传递性 transitive

    对于非null引用值x、y、z,若x.equals(y)返回true且y.equals(z)返回true,则x.equals(z)应返回true

  • 一致性

    对于非null引用值x、y,对于x.equals(y),多次调用返回值应是相同的,前提是equals()中参与比较的字段值没有改变

  • 非null性

    对于任何非null引用值x,x.equals(null)总应返回false

Object类的equals()方法实现了区分两个非null引用值x、y的方法,当且仅当x、y指向同一个对象,即x==y时返回true。

注意:一般重写了equals()方法就要重写hashCode()方法,以维持hashCode()的通用约定:equals相等的对象必须有相同的hash code。
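一个同时重写 equals() 和 hashCode() 的典型写法示意(Point 为假设的示例类):

import java.util.Objects;

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // 与 equals 使用相同的字段,保证 equals 相等的对象拥有相同的 hash code
        return Objects.hash(x, y);
    }
}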


2019-11-9 14:36:34

更多:面试官爱问的equals与hashCode

前言

注解在Java SE5中引入。注解好处多多,TA可以使我们的代码变得简洁,减少样板代码,还能存储有关程序的额外信息。程序员还可以自定义注解


前言

这是打算认真写博客后的第一篇,今天来研究一下Java的反射机制。

反射简介

反射机制可以让人们在运行时获得类的信息。

Class类和java.lang.reflect类库(包含Constructor Method Field Modifier Array类)为反射提供了支持。

阅读全文 »